Dec 02 10:07:54 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 02 10:07:54 crc restorecon[4740]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:54 crc restorecon[4740]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:54 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 10:07:55 crc restorecon[4740]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 
10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 10:07:55 crc 
restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 
10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 10:07:55 crc restorecon[4740]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 02 10:07:55 crc kubenswrapper[4813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 10:07:55 crc kubenswrapper[4813]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 02 10:07:55 crc kubenswrapper[4813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 10:07:55 crc kubenswrapper[4813]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 02 10:07:55 crc kubenswrapper[4813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 02 10:07:55 crc kubenswrapper[4813]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.909340 4813 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912199 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912217 4813 feature_gate.go:330] unrecognized feature gate: Example Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912224 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912229 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912235 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912240 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912244 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912250 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912256 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912262 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912267 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912272 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912277 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912282 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912286 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912291 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912296 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912300 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912305 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912310 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912315 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912319 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912326 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912332 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912337 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912343 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912348 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912355 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912360 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912365 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912371 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912376 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912381 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912385 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912390 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912395 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912399 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912403 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912409 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912414 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912419 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912423 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912427 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912432 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912436 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912441 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912446 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912450 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912454 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912459 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912463 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912469 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912474 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912478 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912482 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912487 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912493 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912498 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912503 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912507 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912512 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912516 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912520 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912525 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912529 4813 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912534 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912539 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912544 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912549 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912553 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.912557 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913163 4813 flags.go:64] FLAG: --address="0.0.0.0" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913185 4813 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913195 4813 flags.go:64] FLAG: --anonymous-auth="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913203 4813 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913210 4813 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913216 4813 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913223 4813 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913231 4813 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913236 4813 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913241 4813 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913247 4813 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913252 4813 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913258 4813 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913264 4813 flags.go:64] FLAG: --cgroup-root="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913269 4813 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913274 4813 flags.go:64] FLAG: --client-ca-file="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913279 4813 flags.go:64] FLAG: --cloud-config="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913286 4813 flags.go:64] FLAG: --cloud-provider="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913292 4813 flags.go:64] FLAG: --cluster-dns="[]" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913301 4813 flags.go:64] FLAG: --cluster-domain="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913307 4813 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913312 4813 flags.go:64] FLAG: --config-dir="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913318 4813 
flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913324 4813 flags.go:64] FLAG: --container-log-max-files="5" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913331 4813 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913337 4813 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913342 4813 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913348 4813 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913353 4813 flags.go:64] FLAG: --contention-profiling="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913358 4813 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913363 4813 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913370 4813 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913375 4813 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913382 4813 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913387 4813 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913393 4813 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913398 4813 flags.go:64] FLAG: --enable-load-reader="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913403 4813 flags.go:64] FLAG: --enable-server="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913408 4813 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913414 4813 flags.go:64] FLAG: --event-burst="100" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913420 4813 flags.go:64] FLAG: --event-qps="50" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913425 4813 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913430 4813 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913435 4813 flags.go:64] FLAG: --eviction-hard="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913442 4813 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913447 4813 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913452 4813 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913457 4813 flags.go:64] FLAG: --eviction-soft="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913462 4813 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913470 4813 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913476 4813 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913481 4813 flags.go:64] FLAG: --experimental-mounter-path="" Dec 
02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913487 4813 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913492 4813 flags.go:64] FLAG: --fail-swap-on="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913497 4813 flags.go:64] FLAG: --feature-gates="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913504 4813 flags.go:64] FLAG: --file-check-frequency="20s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913509 4813 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913721 4813 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913730 4813 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913735 4813 flags.go:64] FLAG: --healthz-port="10248" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913740 4813 flags.go:64] FLAG: --help="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913745 4813 flags.go:64] FLAG: --hostname-override="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913751 4813 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913756 4813 flags.go:64] FLAG: --http-check-frequency="20s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913761 4813 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913766 4813 flags.go:64] FLAG: --image-credential-provider-config="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913770 4813 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913777 4813 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913781 4813 flags.go:64] FLAG: --image-service-endpoint="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913786 4813 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913791 4813 flags.go:64] FLAG: --kube-api-burst="100" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913795 4813 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913801 4813 flags.go:64] FLAG: --kube-api-qps="50" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913805 4813 flags.go:64] FLAG: --kube-reserved="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913810 4813 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913815 4813 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913821 4813 flags.go:64] FLAG: --kubelet-cgroups="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913825 4813 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913830 4813 flags.go:64] FLAG: --lock-file="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913835 4813 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913840 4813 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913848 4813 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913857 
4813 flags.go:64] FLAG: --log-json-split-stream="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913862 4813 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913867 4813 flags.go:64] FLAG: --log-text-split-stream="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913872 4813 flags.go:64] FLAG: --logging-format="text" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913877 4813 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913883 4813 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913888 4813 flags.go:64] FLAG: --manifest-url="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913892 4813 flags.go:64] FLAG: --manifest-url-header="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913899 4813 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913903 4813 flags.go:64] FLAG: --max-open-files="1000000" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913909 4813 flags.go:64] FLAG: --max-pods="110" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913913 4813 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913918 4813 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913923 4813 flags.go:64] FLAG: --memory-manager-policy="None" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913928 4813 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913933 4813 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913938 4813 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913944 4813 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913956 4813 flags.go:64] FLAG: --node-status-max-images="50" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913960 4813 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913965 4813 flags.go:64] FLAG: --oom-score-adj="-999" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913971 4813 flags.go:64] FLAG: --pod-cidr="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913976 4813 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913985 4813 flags.go:64] FLAG: --pod-manifest-path="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913989 4813 flags.go:64] FLAG: --pod-max-pids="-1" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913994 4813 flags.go:64] FLAG: --pods-per-core="0" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.913999 4813 flags.go:64] FLAG: --port="10250" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914004 4813 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914009 4813 flags.go:64] FLAG: --provider-id="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 
10:07:55.914013 4813 flags.go:64] FLAG: --qos-reserved="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914018 4813 flags.go:64] FLAG: --read-only-port="10255" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914023 4813 flags.go:64] FLAG: --register-node="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914029 4813 flags.go:64] FLAG: --register-schedulable="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914034 4813 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914043 4813 flags.go:64] FLAG: --registry-burst="10" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914048 4813 flags.go:64] FLAG: --registry-qps="5" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914053 4813 flags.go:64] FLAG: --reserved-cpus="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914058 4813 flags.go:64] FLAG: --reserved-memory="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914064 4813 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914093 4813 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914100 4813 flags.go:64] FLAG: --rotate-certificates="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914105 4813 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914110 4813 flags.go:64] FLAG: --runonce="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914114 4813 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914119 4813 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914124 4813 flags.go:64] FLAG: --seccomp-default="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914129 4813 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914134 4813 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914139 4813 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914145 4813 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914150 4813 flags.go:64] FLAG: --storage-driver-password="root" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914155 4813 flags.go:64] FLAG: --storage-driver-secure="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914160 4813 flags.go:64] FLAG: --storage-driver-table="stats" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914165 4813 flags.go:64] FLAG: --storage-driver-user="root" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914169 4813 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914174 4813 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914179 4813 flags.go:64] FLAG: --system-cgroups="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914184 4813 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914192 4813 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 02 10:07:55 crc kubenswrapper[4813]: 
I1202 10:07:55.914197 4813 flags.go:64] FLAG: --tls-cert-file="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914201 4813 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914207 4813 flags.go:64] FLAG: --tls-min-version="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914243 4813 flags.go:64] FLAG: --tls-private-key-file="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914249 4813 flags.go:64] FLAG: --topology-manager-policy="none" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914254 4813 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914259 4813 flags.go:64] FLAG: --topology-manager-scope="container" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914264 4813 flags.go:64] FLAG: --v="2" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914271 4813 flags.go:64] FLAG: --version="false" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914278 4813 flags.go:64] FLAG: --vmodule="" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914284 4813 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.914288 4813 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914393 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914399 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914405 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
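The I-level flags.go:64 block above is the kubelet echoing every flag after parsing. It uses spf13/pflag, whose VisitAll walks flags in lexicographical order by default, matching the ordering in the log; note that defaults and explicitly-set values print identically, so only knowledge of the defaults distinguishes them. A reproduction sketch (the two flags are examples picked from the dump):

```go
// Reproduce the "FLAG: --name=value" dump using the same spf13/pflag
// library the kubelet uses. VisitAll iterates in lexical order when
// SortFlags is left at its default (true).
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("kubelet", pflag.ContinueOnError)
	fs.String("node-ip", "", "node IP to advertise")
	fs.Int32("max-pods", 110, "maximum number of pods")
	_ = fs.Parse([]string{"--node-ip=192.168.126.11"})
	fs.VisitAll(func(f *pflag.Flag) {
		// Explicit values and untouched defaults print the same way,
		// exactly as in the log.
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}
```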
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914411 4813 feature_gate.go:330] unrecognized feature gate: Example Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914416 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914421 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914426 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914430 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914435 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914439 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914444 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914449 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914453 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914458 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914463 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914467 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914471 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914475 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914480 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914484 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914489 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914493 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.914573 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916558 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916603 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916609 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916614 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916618 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916624 4813 feature_gate.go:330] 
unrecognized feature gate: BareMetalLoadBalancer Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916628 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916632 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916636 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916643 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916648 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916653 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916657 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916670 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916675 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916680 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916684 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916688 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916930 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916937 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916942 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916950 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916959 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916965 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916970 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916975 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916980 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916985 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.916995 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917000 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917005 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917012 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917021 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917376 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917393 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917397 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917402 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917406 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917409 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917415 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917418 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917423 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917427 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917431 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917435 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917438 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917442 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.917446 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.917551 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.928847 4813 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.928895 4813 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.928986 
4813 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.928997 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929002 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929007 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929014 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929018 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929023 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929027 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929031 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929037 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929044 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929048 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929053 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929059 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929064 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929087 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929092 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929096 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929101 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929106 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929111 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929116 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929120 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929127 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
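Each pass over the gate list ends with an I-level feature_gate.go:386 entry like "feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}". That map holds only the gates that were explicitly set, not the kubelet's full registry, and its alphabetical ordering is simply Go's fmt package sorting map keys when formatting (Go 1.12 and later). Minimal reproduction:

```go
// Why the "feature gates: {map[...]}" line is alphabetical: fmt sorts
// map keys when printing a map, so the order is deterministic.
package main

import "fmt"

func main() {
	gates := map[string]bool{
		"KMSv1":                 true,
		"NodeSwap":              false,
		"CloudDualStackNodeIPs": true,
	}
	// Prints: feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}
	fmt.Printf("feature gates: {%v}\n", gates)
}
```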
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929134 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929141 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929147 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929151 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929156 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929162 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929167 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929172 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929178 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929184 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929189 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929194 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929199 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929204 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929208 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929213 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929218 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929224 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929230 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929235 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929241 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929245 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929249 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929255 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929261 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929265 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929270 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929274 4813 feature_gate.go:330] unrecognized feature gate: Example Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929279 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929284 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929288 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929292 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929296 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929300 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929304 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929310 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929316 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929321 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929326 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929330 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929335 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929340 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929344 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929349 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929353 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929358 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929362 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.929371 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929559 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929571 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929578 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929583 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929589 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929595 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929601 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929607 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929613 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929618 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929622 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929627 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929633 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929637 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929642 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929647 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929651 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929656 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929660 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929666 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929671 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929675 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929681 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929689 4813 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929694 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929699 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929708 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929714 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929720 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929724 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929729 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929733 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929738 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929743 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929747 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929753 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
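By this point the same warning block and the same resolved map have appeared three times with identical content. A plausible reading, offered as an inference rather than something the log states, is that the wrapper applies one raw gate list to more than one FeatureGate instance during startup (command-line parsing, server configuration, runtime setup), and each application re-logs the unknown names:

```go
// Sketch of why the identical warning block repeats: the same raw
// gate list is applied to several independent FeatureGate instances,
// and each one logs unknown names again. Structure is illustrative.
package main

import "fmt"

type featureGates struct{ known map[string]bool }

func (fg featureGates) apply(names []string) {
	for _, n := range names {
		if !fg.known[n] {
			fmt.Printf("W unrecognized feature gate: %s\n", n)
		}
	}
}

func main() {
	raw := []string{"GatewayAPI", "KMSv1"}
	cmdline := featureGates{known: map[string]bool{"KMSv1": true}}
	runtime := featureGates{known: map[string]bool{"KMSv1": true}}
	cmdline.apply(raw) // first block in the log...
	runtime.apply(raw) // ...and the same block again later
}
```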
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929760 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929765 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929770 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929775 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929780 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929785 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929790 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929795 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929800 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929804 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929809 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929814 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929818 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929823 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929828 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929833 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929838 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929843 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929847 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929851 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929856 4813 feature_gate.go:330] unrecognized feature gate: Example Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929860 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929866 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929873 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929878 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929882 4813 feature_gate.go:330] unrecognized feature 
gate: HardwareSpeed Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929887 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929891 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929896 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929901 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929905 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929910 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929914 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929919 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.929923 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.929929 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.930222 4813 server.go:940] "Client rotation is on, will bootstrap in background" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.933714 4813 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.933834 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
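Bootstrap is skipped because the existing kubeconfig is still valid, and the client certificate manager then schedules rotation well before expiry: the entries that follow show an expiration of 2026-02-24 05:52:08 but a rotation deadline of 2026-01-01, about 732h away. The upstream kubelet picks that deadline at a jittered fraction of the certificate's validity window (roughly 70-90% of the lifetime; treat the exact fraction here as an assumption). A sketch of the computation, with an illustrative notBefore since the log does not show one:

```go
// Sketch of the rotation-deadline computation behind the
// certificate_manager entries: choose a jittered point at roughly
// 70-90% of the certificate's validity, then wait until it. The
// fraction and the notBefore value are assumptions, not log data.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 11, 25, 5, 52, 8, 0, time.UTC) // illustrative
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)   // expiry from the log
	d := rotationDeadline(notBefore, notAfter)
	// Mirrors "Waiting 732h7m56.144138891s for next certificate rotation".
	fmt.Printf("rotation deadline %v, waiting %v\n", d, time.Until(d))
}
```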
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.934452 4813 server.go:997] "Starting client certificate rotation"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.934473 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.934755 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-01 22:15:52.079128224 +0000 UTC
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.934995 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 732h7m56.144138891s for next certificate rotation
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.940139 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.942138 4813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.951582 4813 log.go:25] "Validated CRI v1 runtime API"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.965351 4813 log.go:25] "Validated CRI v1 image API"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.967089 4813 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.970466 4813 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-02-10-03-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.970514 4813 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.989897 4813 manager.go:217] Machine: {Timestamp:2025-12-02 10:07:55.987707974 +0000 UTC m=+0.182882296 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:fbb40b6c-9f6a-4fae-a398-84ef5378393c BootID:634e706a-26e4-4e25-9891-c6df4b41c61e Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:78:26:0e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:78:26:0e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:98:23:ad Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:af:9e:23 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:78:81:1b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:24:19:8e Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:ba:8d:aa Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:68:f7:b4:35:f2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4a:6c:d9:a6:19:f0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.990248 4813 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.990511 4813 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.990957 4813 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.991186 4813 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.991223 4813 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.991502 4813 topology_manager.go:138] "Creating topology manager with none policy"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.991516 4813 container_manager_linux.go:303] "Creating device plugin manager"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.991765 4813 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.991814 4813 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.992328 4813 state_mem.go:36] "Initialized new in-memory state store" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.992956 4813 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.993917 4813 kubelet.go:418] "Attempting to sync node with API server" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.993945 4813 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.993975 4813 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.993991 4813 kubelet.go:324] "Adding apiserver pod source" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.994006 4813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.996830 4813 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.997224 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998053 4813 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998671 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998699 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998715 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998724 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998738 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998747 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998756 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998769 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998780 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998788 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998871 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.998882 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.999166 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.999303 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 02 10:07:55 crc kubenswrapper[4813]: E1202 10:07:55.999307 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Dec 02 10:07:55 crc kubenswrapper[4813]: W1202 10:07:55.999354 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 02 10:07:55 crc kubenswrapper[4813]: E1202 10:07:55.999493 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Dec 02 10:07:55 crc kubenswrapper[4813]: I1202 10:07:55.999864 4813 server.go:1280] "Started kubelet"
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.000112 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.000180 4813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.000203 4813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.000926 4813 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 02 10:07:56 crc systemd[1]: Started Kubernetes Kubelet.
Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.001360 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d5e15a1bb41df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 10:07:55.999830495 +0000 UTC m=+0.195004807,LastTimestamp:2025-12-02 10:07:55.999830495 +0000 UTC m=+0.195004807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.007722 4813 server.go:460] "Adding debug handlers to kubelet server"
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.008209 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.008348 4813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.008388 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:35:45.184766419 +0000 UTC
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.008674 4813 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.008659 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.008696 4813 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.008713 4813 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 02 10:07:56 crc kubenswrapper[4813]: W1202 10:07:56.010145 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.010300 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.010349 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="200ms"
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.010549 4813 factory.go:55] Registering systemd factory
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.010589 4813 factory.go:221] Registration of the systemd container factory successfully
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.012185 4813 factory.go:153] Registering CRI-O factory
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.012284 4813 factory.go:221] Registration of the crio container factory successfully
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.012417 4813 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.012501 4813 factory.go:103] Registering Raw factory
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.012570 4813 manager.go:1196] Started watching for new ooms in manager
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.013319 4813 manager.go:319] Starting recovery of all containers
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.027605 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.027772 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.027820 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.027850 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.027890 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.027920 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.027951 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029514 4813 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029606 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029643 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029673 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029690 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029709 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029725 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029748 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029763 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029779 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029802 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029816 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029836 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029851 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029865 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029889 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029908 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029932 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029952 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029972 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.029993 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030012 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030024 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030042 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030057 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030096 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030113 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030128 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030145 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030158 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030172 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030191 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030207 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030227 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030241 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030253 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030271 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030285 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.030302 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031060 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031192 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031214 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031243 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031257 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031279 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031294 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031329 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031346 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031368 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031389 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031403 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031419 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031433 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031450 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031467 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031485 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031502 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031515 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031529 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031548 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031561 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031577 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031590 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.031610 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.033679 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.033812 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.033902 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.033985 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034135 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034233 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034320 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034400 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034480 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034612 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034702 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034785 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034862 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.034942 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035029 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035209 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035309 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035395 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035477 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035557 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035645 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035739 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035831 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.035918 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036001 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036102 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036208 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036294 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036375 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036457 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036541 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036624 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036709 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036792 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036889 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.036977 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037061 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037183 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037272 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037356 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037447 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037532 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037634 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037713 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037797 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037879 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037954 4813 manager.go:324] Recovery completed Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.037963 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.038923 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039026 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039125 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039212 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039310 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039403 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039486 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039563 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039635 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039712 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039782 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039860 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.039928 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040012 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040111 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040201 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040290 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040367 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040447 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040528 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040608 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040691 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040769 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040842 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.040922 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041014 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041120 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041202 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041285 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041362 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041436 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041525 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041608 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041685 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041770 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.041853 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.043015 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.043452 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.043535 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.043614 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.043689 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.043769 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.043858 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.043938 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044016 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044166 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044250 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044326 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044520 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044600 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044704 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044782 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044864 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.044948 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045028 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045124 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045208 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045288 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045379 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045458 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045530 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045601 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045676 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045751 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045823 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.045899 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.046948 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047052 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047239 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047329 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047415 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047495 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047571 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047656 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047756 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047869 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.047962 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048050 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048158 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048244 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048322 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048399 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048477 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048551 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048627 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048718 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048798 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048877 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.048958 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.049040 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.049150 4813 reconstruct.go:97] "Volume reconstruction finished" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.049231 4813 reconciler.go:26] "Reconciler: start to sync state" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.050782 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.052425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.052486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.052500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.053526 4813 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.053542 4813 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.053562 4813 state_mem.go:36] "Initialized new in-memory state store" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.062062 4813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.062373 4813 policy_none.go:49] "None policy: Start" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.063458 4813 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.063494 4813 state_mem.go:35] "Initializing new in-memory state store" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.066489 4813 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.066551 4813 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.066582 4813 kubelet.go:2335] "Starting kubelet main sync loop" Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.066666 4813 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 02 10:07:56 crc kubenswrapper[4813]: W1202 10:07:56.067690 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.067772 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.108909 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.127381 4813 manager.go:334] "Starting Device Plugin manager" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.127451 4813 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.127471 4813 server.go:79] "Starting device plugin registration server" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.128101 4813 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.128122 4813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.128294 4813 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.128409 4813 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.128419 4813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.135684 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.167791 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.167890 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.168987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.169023 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.169037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.169178 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.169325 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.169374 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170281 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170564 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.170677 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.171196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.171243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.171257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.171420 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.171720 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.171769 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.172587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.172647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.172665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.172610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.172826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.172860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.173065 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.173269 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.173322 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.173275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.173395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.173405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.174466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.174488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.174502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.174518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.174541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.174552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.174697 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.174740 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.175616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.175651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.175661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.211668 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.228509 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.230225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.230273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.230286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.230317 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.231231 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.250912 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.250993 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251018 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251160 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251263 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251303 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251321 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251390 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251473 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251499 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.251523 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.352799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.352874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.352908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.352932 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.352957 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.352978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.352999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353002 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353098 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353024 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353155 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353201 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 
10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353260 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353288 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353403 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353427 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353406 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353507 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353518 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.353696 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.431623 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.433128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.433187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.433204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.433237 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.433881 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.506797 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.507936 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.515219 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.527298 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.532393 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:07:56 crc kubenswrapper[4813]: W1202 10:07:56.533085 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-137a0c22440198af69ac025a9c10267cd2fdd952c29ce51012256ed0b4a50790 WatchSource:0}: Error finding container 137a0c22440198af69ac025a9c10267cd2fdd952c29ce51012256ed0b4a50790: Status 404 returned error can't find the container with id 137a0c22440198af69ac025a9c10267cd2fdd952c29ce51012256ed0b4a50790 Dec 02 10:07:56 crc kubenswrapper[4813]: W1202 10:07:56.534262 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-32276d973fa8912faec3e57c6c581c1b48af0ad29085d9b90cbb47274eb307f7 WatchSource:0}: Error finding container 32276d973fa8912faec3e57c6c581c1b48af0ad29085d9b90cbb47274eb307f7: Status 404 returned error can't find the container with id 32276d973fa8912faec3e57c6c581c1b48af0ad29085d9b90cbb47274eb307f7 Dec 02 10:07:56 crc kubenswrapper[4813]: W1202 10:07:56.538385 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6a0813aac48b0cc6f0fbd75a7839331464a2d33743594c659e4702787bbb306e WatchSource:0}: Error finding container 6a0813aac48b0cc6f0fbd75a7839331464a2d33743594c659e4702787bbb306e: Status 404 returned error can't find the container with id 6a0813aac48b0cc6f0fbd75a7839331464a2d33743594c659e4702787bbb306e Dec 02 10:07:56 crc kubenswrapper[4813]: W1202 10:07:56.543417 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-94a52c51edeff75585809c4a9e13434f243969259ba258dd47f060ffc50c53ed WatchSource:0}: Error finding container 94a52c51edeff75585809c4a9e13434f243969259ba258dd47f060ffc50c53ed: Status 404 returned error can't find the container with id 94a52c51edeff75585809c4a9e13434f243969259ba258dd47f060ffc50c53ed Dec 02 10:07:56 crc kubenswrapper[4813]: W1202 10:07:56.546568 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-bdc2feff4094056b1b34c4327b9a3c5c107bfbfe7d664c10925ce04358a724d9 WatchSource:0}: Error finding container bdc2feff4094056b1b34c4327b9a3c5c107bfbfe7d664c10925ce04358a724d9: Status 404 returned error can't find the container with id bdc2feff4094056b1b34c4327b9a3c5c107bfbfe7d664c10925ce04358a724d9 Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.613109 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.835049 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.836425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.836482 4813 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.836500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:56 crc kubenswrapper[4813]: I1202 10:07:56.836546 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:07:56 crc kubenswrapper[4813]: E1202 10:07:56.837140 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.001200 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.009185 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:23:57.580464292 +0000 UTC Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.009291 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h16m0.571178266s for next certificate rotation Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.073196 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa" exitCode=0 Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.073294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.073439 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"94a52c51edeff75585809c4a9e13434f243969259ba258dd47f060ffc50c53ed"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.073596 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.075089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.075128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.075142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.075581 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2734586bf848146bffba63504a7006d5e48e6bc1d4fc0e12bb1c29cfeb511590" exitCode=0 Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.075644 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2734586bf848146bffba63504a7006d5e48e6bc1d4fc0e12bb1c29cfeb511590"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.075665 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6a0813aac48b0cc6f0fbd75a7839331464a2d33743594c659e4702787bbb306e"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.075769 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.076551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.076587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.076603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.077290 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.078222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.078260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.078272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.079599 4813 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803" exitCode=0 Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.079655 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.079717 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"32276d973fa8912faec3e57c6c581c1b48af0ad29085d9b90cbb47274eb307f7"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.079839 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.081111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.081200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.081218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.082252 4813 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e" exitCode=0 Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.082331 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.082368 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"137a0c22440198af69ac025a9c10267cd2fdd952c29ce51012256ed0b4a50790"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.082460 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.083397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.083450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.083466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.085355 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659"} Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.085404 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bdc2feff4094056b1b34c4327b9a3c5c107bfbfe7d664c10925ce04358a724d9"} Dec 02 10:07:57 crc kubenswrapper[4813]: W1202 10:07:57.224043 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 02 10:07:57 crc kubenswrapper[4813]: E1202 10:07:57.224190 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:07:57 crc kubenswrapper[4813]: W1202 10:07:57.301778 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 02 10:07:57 crc kubenswrapper[4813]: E1202 10:07:57.301897 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:07:57 crc kubenswrapper[4813]: E1202 10:07:57.416894 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s" Dec 02 10:07:57 crc kubenswrapper[4813]: W1202 10:07:57.500177 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 02 10:07:57 crc kubenswrapper[4813]: E1202 10:07:57.500318 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:07:57 crc kubenswrapper[4813]: W1202 10:07:57.530999 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 02 10:07:57 crc kubenswrapper[4813]: E1202 10:07:57.531153 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.637678 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.640450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.640540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.640555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:57 crc kubenswrapper[4813]: I1202 10:07:57.640617 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.090169 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.090222 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.090233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.090260 4813 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.091785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.091830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.091845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.093666 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.093710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.093725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.093736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.096190 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b7dfa7e462e1d143d02e7b02a9148c3ddac655871de812a6d5ae1d720879ff95" exitCode=0 Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.096315 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b7dfa7e462e1d143d02e7b02a9148c3ddac655871de812a6d5ae1d720879ff95"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.096542 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.097596 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.097638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.097650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.098157 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.098271 4813 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.098996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.099022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.099033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.101620 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.101663 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.101678 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f"} Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.101806 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.102611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.102645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:58 crc kubenswrapper[4813]: I1202 10:07:58.102656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.111992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"294e1f9c38936e2d36d31e40633ded519ddd6228487c596eb7f24779c1867bc2"} Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.112015 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="294e1f9c38936e2d36d31e40633ded519ddd6228487c596eb7f24779c1867bc2" exitCode=0 Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.113129 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.115157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.115200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.115221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.124194 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce"} Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.124337 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.124344 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.124234 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.124646 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.126694 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:07:59 crc kubenswrapper[4813]: I1202 10:07:59.162844 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.133513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a0f3f5152821d03fc1787718207b3314baa0e23cb28b7bcc01c2c047e4e03be9"} Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.133580 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b33710d1c57d153914f69e4dd2ad48be6768ec971cf3364740d331cbd0c934f6"} Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.133591 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.133639 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:00 crc kubenswrapper[4813]: 
I1202 10:08:00.133598 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a9bb863bc54c186860f87789a3da45432da2e9fe69d89f111f288b46a72567b0"} Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.133783 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.133805 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c477580e13917b8042147823751c8883b6a4405aa944e3b6994fc2bd1935658c"} Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.133848 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.133865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ff032df4aad87bc199b2f53df409dbaf3fa8a7fa1f48d0f8dd314f8420d0292"} Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.134965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.135000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.135010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.135148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.135170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.135180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.135253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.135296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.135310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.590122 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.590343 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.591648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.591694 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:00 crc kubenswrapper[4813]: I1202 10:08:00.591707 4813 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.133988 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.136244 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.136320 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.138157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.138205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.138227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.138281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.138329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:01 crc kubenswrapper[4813]: I1202 10:08:01.138346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.060437 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.060703 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.062579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.062633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.062681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.140246 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.141599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.141688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:02 crc kubenswrapper[4813]: I1202 10:08:02.141716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.546513 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.546763 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 
10:08:03.548763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.548836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.548847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.710802 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.711012 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.712829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.712880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:03 crc kubenswrapper[4813]: I1202 10:08:03.712892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.121753 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.147021 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.148833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.148896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.148911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.776456 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.776907 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.778536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.778579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:04 crc kubenswrapper[4813]: I1202 10:08:04.778596 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.124472 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.124693 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.127032 4813 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.127120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.127135 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.132297 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.149820 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.150879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.150924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:05 crc kubenswrapper[4813]: I1202 10:08:05.150938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:06 crc kubenswrapper[4813]: E1202 10:08:06.135900 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 10:08:06 crc kubenswrapper[4813]: I1202 10:08:06.711844 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 10:08:06 crc kubenswrapper[4813]: I1202 10:08:06.711937 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 10:08:07 crc kubenswrapper[4813]: E1202 10:08:07.642234 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 02 10:08:08 crc kubenswrapper[4813]: I1202 10:08:08.001996 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 02 10:08:08 crc kubenswrapper[4813]: I1202 10:08:08.270795 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 02 10:08:08 crc kubenswrapper[4813]: I1202 10:08:08.270908 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: 
connection refused" Dec 02 10:08:08 crc kubenswrapper[4813]: I1202 10:08:08.832185 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 02 10:08:08 crc kubenswrapper[4813]: I1202 10:08:08.832278 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 10:08:09 crc kubenswrapper[4813]: I1202 10:08:09.128147 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]log ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]etcd ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/generic-apiserver-start-informers ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/priority-and-fairness-filter ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-apiextensions-informers ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-apiextensions-controllers ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/crd-informer-synced ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-system-namespaces-controller ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 02 10:08:09 crc kubenswrapper[4813]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 02 10:08:09 crc kubenswrapper[4813]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/bootstrap-controller ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/apiservice-registration-controller ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/apiservice-discovery-controller ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]autoregister-completion ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/apiservice-openapi-controller ok
Dec 02 10:08:09 crc kubenswrapper[4813]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 02 10:08:09 crc kubenswrapper[4813]: livez check failed
Dec 02 10:08:09 crc kubenswrapper[4813]: I1202 10:08:09.128238 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 10:08:09 crc kubenswrapper[4813]: I1202 10:08:09.242919 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:08:09 crc kubenswrapper[4813]: I1202 10:08:09.244294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:09 crc kubenswrapper[4813]: I1202 10:08:09.244328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:09 crc kubenswrapper[4813]: I1202 10:08:09.244337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:09 crc kubenswrapper[4813]: I1202 10:08:09.244361 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 02 10:08:11 crc kubenswrapper[4813]: I1202 10:08:11.171067 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 02 10:08:11 crc kubenswrapper[4813]: I1202 10:08:11.171882 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:08:11 crc kubenswrapper[4813]: I1202 10:08:11.173607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:11 crc kubenswrapper[4813]: I1202 10:08:11.173666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:11 crc kubenswrapper[4813]: I1202 10:08:11.173684 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:11 crc kubenswrapper[4813]: I1202 10:08:11.188897 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.065721 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.065903 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.067262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.067318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.067330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.171860 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.173129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.173177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:12 crc kubenswrapper[4813]: I1202 10:08:12.173190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:13 crc kubenswrapper[4813]: E1202 10:08:13.825127 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.828973 4813 trace.go:236] Trace[1282398122]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 10:07:59.428) (total time: 14399ms):
Dec 02 10:08:13 crc kubenswrapper[4813]: Trace[1282398122]: ---"Objects listed" error: 14399ms (10:08:13.828)
Dec 02 10:08:13 crc kubenswrapper[4813]: Trace[1282398122]: [14.39978972s] [14.39978972s] END
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.829007 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.829596 4813 trace.go:236] Trace[1843313799]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 10:07:59.440) (total time: 14388ms):
Dec 02 10:08:13 crc kubenswrapper[4813]: Trace[1843313799]: ---"Objects listed" error: 14388ms (10:08:13.829)
Dec 02 10:08:13 crc kubenswrapper[4813]: Trace[1843313799]: [14.388821194s] [14.388821194s] END
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.829618 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.830173 4813 trace.go:236] Trace[81738228]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 10:08:00.513) (total time: 13317ms):
Dec 02 10:08:13 crc kubenswrapper[4813]: Trace[81738228]: ---"Objects listed" error: 13316ms (10:08:13.829)
Dec 02 10:08:13 crc kubenswrapper[4813]: Trace[81738228]: [13.317042s] [13.317042s] END
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.830214 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.830670 4813 trace.go:236] Trace[624958886]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 10:07:59.344) (total time: 14486ms):
Dec 02 10:08:13 crc kubenswrapper[4813]: Trace[624958886]: ---"Objects listed" error: 14486ms (10:08:13.830)
Dec 02 10:08:13 crc kubenswrapper[4813]: Trace[624958886]: [14.48647948s] [14.48647948s] END
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.830830 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.830954 4813 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.932388 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:08:13 crc kubenswrapper[4813]: I1202 10:08:13.939568 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.003922 4813 apiserver.go:52] "Watching apiserver"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.007578 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.007877 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-97mdk","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/machine-config-daemon-4p89g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/iptables-alerter-4ln5h"]
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.008365 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.008423 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.008395 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.008515 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.008569 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.008809 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.008920 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.008965 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.009131 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.009164 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-97mdk"
Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.009370 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.010933 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.012305 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.023090 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.023530 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024137 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024204 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024280 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024356 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024482 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024576 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024584 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024846
4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024672 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.025008 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024730 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.024778 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.028971 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.033945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034010 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8285r\" (UniqueName: \"kubernetes.io/projected/db121737-190f-4b43-9d79-e96e2dd76080-kube-api-access-8285r\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034093 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034118 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034139 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034159 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034209 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db121737-190f-4b43-9d79-e96e2dd76080-mcd-auth-proxy-config\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034256 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3fbb40e6-955d-4ba1-b48f-e535ed20494d-hosts-file\") pod \"node-resolver-97mdk\" (UID: \"3fbb40e6-955d-4ba1-b48f-e535ed20494d\") " pod="openshift-dns/node-resolver-97mdk" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034302 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2g6\" (UniqueName: \"kubernetes.io/projected/3fbb40e6-955d-4ba1-b48f-e535ed20494d-kube-api-access-rb2g6\") pod \"node-resolver-97mdk\" (UID: \"3fbb40e6-955d-4ba1-b48f-e535ed20494d\") " pod="openshift-dns/node-resolver-97mdk" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db121737-190f-4b43-9d79-e96e2dd76080-proxy-tls\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034374 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034402 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034451 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.034474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/db121737-190f-4b43-9d79-e96e2dd76080-rootfs\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.034668 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.034738 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:14.534716573 +0000 UTC m=+18.729890875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.035978 4813 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.037919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.038226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.038239 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.038472 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:14.538453408 +0000 UTC m=+18.733627710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.042137 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.048333 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.049752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.058641 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.058697 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.058715 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.058785 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:14.558764492 +0000 UTC m=+18.753938794 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.060530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.063350 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.064005 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.064043 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.064058 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.064148 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:14.564120627 +0000 UTC m=+18.759294939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.066940 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.067705 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.069534 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.081565 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.108687 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.109419 4813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.125906 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.129125 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.134468 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135546 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135587 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135607 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135626 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135647 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135667 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135705 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135736 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135788 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135806 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135849 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135870 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135885 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.136630 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.136601 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.136676 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.136729 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.136794 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.136896 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39462->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.136944 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.136973 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39462->192.168.126.11:17697: read: connection reset by peer" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137019 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137116 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137166 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137287 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137366 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137369 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137577 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.135902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137697 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137737 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137747 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137764 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137815 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137851 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137888 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137913 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137946 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.137976 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.138005 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.138000 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.138024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.138027 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.138960 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.138750 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.138868 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.138958 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139012 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139334 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139296 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139948 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139951 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.139969 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140046 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140098 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140118 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140143 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140151 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140168 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140196 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140223 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140268 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140296 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140332 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140366 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140393 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140418 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140469 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" 
(OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140468 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140728 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140947 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.141207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.141338 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:08:14.641317828 +0000 UTC m=+18.836492130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.141382 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.141713 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.141747 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.141951 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.142132 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.142200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.142337 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.142734 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.142934 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.142795 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.143047 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.143474 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.144431 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.144558 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.144594 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.144737 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
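The status patch above fails for a different reason than the volume work: the API server must call the pod.network-node-identity.openshift.io webhook before accepting the patch, and the webhook backend at 127.0.0.1:9743 is not serving yet, so the call is refused and the status manager will retry. A hypothetical triage helper for watching the endpoint come back, with the address taken from the log line above:

```go
// Hypothetical triage helper: check whether the webhook endpoint that the
// failed status patch above depends on (127.0.0.1:9743, served by the
// network-node-identity pod) is accepting TCP connections yet.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint still down:", err) // matches the "connection refused" above
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint reachable; status patches should start succeeding")
}
```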
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.140446 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.145481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.145549 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.145649 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.145702 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.145950 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.145976 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.145997 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146022 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146047 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146091 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146113 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146306 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.145888 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146353 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146230 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146253 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146192 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146305 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146405 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146479 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146501 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146567 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146594 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146651 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146675 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146698 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146780 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146826 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146843 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146899 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.146935 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147027 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147035 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147065 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147130 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147138 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147164 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147192 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147257 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147256 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147242 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147295 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147325 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147357 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147391 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147419 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147449 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147480 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147507 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147539 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147563 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147592 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147622 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147648 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147678 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147704 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147815 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147854 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147891 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148651 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148793 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148834 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148900 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148965 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149027 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149518 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149563 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149594 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149622 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149651 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149728 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149780 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149825 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149844 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149875 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149897 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 
10:08:14.149914 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149940 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149960 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149978 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.149999 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150018 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150037 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150061 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150105 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150125 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150142 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150161 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150179 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150196 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150237 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150268 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150287 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150310 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150336 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150354 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150372 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150391 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150408 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150428 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150446 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150464 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150482 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150502 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150537 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150592 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150655 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150676 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150694 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150712 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150758 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150796 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150822 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150842 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150884 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150920 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150962 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150983 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151002 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151064 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151112 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151199 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151263 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151296 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151356 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151393 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151447 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151503 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151591 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151618 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151639 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151684 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151785 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151832 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151856 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151873 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151939 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151999 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152109 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/db121737-190f-4b43-9d79-e96e2dd76080-rootfs\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8285r\" (UniqueName: \"kubernetes.io/projected/db121737-190f-4b43-9d79-e96e2dd76080-kube-api-access-8285r\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152399 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db121737-190f-4b43-9d79-e96e2dd76080-mcd-auth-proxy-config\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3fbb40e6-955d-4ba1-b48f-e535ed20494d-hosts-file\") pod \"node-resolver-97mdk\" (UID: \"3fbb40e6-955d-4ba1-b48f-e535ed20494d\") " pod="openshift-dns/node-resolver-97mdk"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb2g6\" (UniqueName: \"kubernetes.io/projected/3fbb40e6-955d-4ba1-b48f-e535ed20494d-kube-api-access-rb2g6\") pod \"node-resolver-97mdk\" (UID: \"3fbb40e6-955d-4ba1-b48f-e535ed20494d\") " pod="openshift-dns/node-resolver-97mdk"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152888 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db121737-190f-4b43-9d79-e96e2dd76080-proxy-tls\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147562 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147601 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.147732 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148002 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148303 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.148823 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.150823 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151791 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.151834 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152201 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152784 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.152958 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.153853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.154359 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.154582 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.154624 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.154650 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.154669 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.154685 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.154705 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.154757 4813 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.155655 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.155920 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.156189 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.156636 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157000 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157056 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157092 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157109 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157124 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157136 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157149 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157163 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157176 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157189 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157202 4813 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157215 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.156656 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157228 4813 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.156909 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157243 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.156935 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157141 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157281 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157301 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157319 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157333 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157347 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157362 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157374 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157388 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157401 4813 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157414 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157428 4813 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157442 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157455 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157470 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157482 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157496 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157510 4813 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157522 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157535 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157548 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157561 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157573 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157586 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157599 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157611 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157625 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157639 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157652 4813 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157664 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157678 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157690 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157702 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157715 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157727 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157740 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157752 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157799 4813 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157814 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157826 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157840 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157853 4813 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157866 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157880 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157893 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157905 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157917 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157932 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157945 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157962 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157974 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157987 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157999 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.158011 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157529 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157566 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157564 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157792 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157111 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.158042 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.158158 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.157290 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.158429 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.158553 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.159470 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.159546 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.159557 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.159665 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.159674 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.160213 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.160223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.160228 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.160500 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.160691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161032 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161381 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161733 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161817 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.161953 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.162086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3fbb40e6-955d-4ba1-b48f-e535ed20494d-hosts-file\") pod \"node-resolver-97mdk\" (UID: \"3fbb40e6-955d-4ba1-b48f-e535ed20494d\") " pod="openshift-dns/node-resolver-97mdk" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.162230 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.162290 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.162544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.162817 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.163026 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.163359 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.163425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.163689 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.163795 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.164219 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.165340 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.166037 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.166086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db121737-190f-4b43-9d79-e96e2dd76080-proxy-tls\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.166913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db121737-190f-4b43-9d79-e96e2dd76080-mcd-auth-proxy-config\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.167003 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/db121737-190f-4b43-9d79-e96e2dd76080-rootfs\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.168583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.169299 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.169852 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.170148 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.170575 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.170565 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.170824 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.171220 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.171773 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.171925 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.171957 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.172353 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.172481 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.172523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.172819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.172850 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.172891 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.174162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.174560 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.175314 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.175678 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.176310 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.176356 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.176589 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.177172 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.177460 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.177520 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.177753 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.178485 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.178726 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.178724 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.179139 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.179734 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.180027 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.179629 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.181197 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.184228 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.184523 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.185181 4813 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.185942 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.186663 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.186848 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8285r\" (UniqueName: \"kubernetes.io/projected/db121737-190f-4b43-9d79-e96e2dd76080-kube-api-access-8285r\") pod \"machine-config-daemon-4p89g\" (UID: \"db121737-190f-4b43-9d79-e96e2dd76080\") " pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.188924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.189130 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.190962 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.191789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb2g6\" (UniqueName: \"kubernetes.io/projected/3fbb40e6-955d-4ba1-b48f-e535ed20494d-kube-api-access-rb2g6\") pod \"node-resolver-97mdk\" (UID: \"3fbb40e6-955d-4ba1-b48f-e535ed20494d\") " pod="openshift-dns/node-resolver-97mdk" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.192610 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.195174 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.195430 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.196560 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.196767 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.196982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.197540 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.197681 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.197711 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.197828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.198242 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.199354 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.199530 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.199701 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.199789 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.199858 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200060 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200199 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200465 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200491 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200694 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200930 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200989 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200961 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.200255 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.201254 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.202325 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.202435 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.202528 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce" exitCode=255 Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.202930 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.205632 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.208157 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.210700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.211099 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.212009 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.213187 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.219679 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.222383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.234113 4813 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.234799 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.245380 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.256175 4813 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.256313 4813 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258213 4813 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258423 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258464 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258639 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258656 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258669 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258681 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258696 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258707 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258718 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258755 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258767 4813 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258775 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258784 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258826 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258839 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258852 4813 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258862 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258871 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258880 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258889 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258897 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258906 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258915 4813 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc 
kubenswrapper[4813]: I1202 10:08:14.258925 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258935 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258944 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258953 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258961 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258969 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258978 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258986 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.258995 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259004 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259012 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259020 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259029 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259038 4813 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259047 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259055 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259063 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259097 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259107 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259117 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259125 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259133 4813 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259142 4813 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259150 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259158 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259167 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259174 4813 reconciler_common.go:293] "Volume detached for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259182 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259190 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259198 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259207 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259216 4813 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259226 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259234 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259243 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259251 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259258 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259267 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259276 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259285 4813 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259293 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259302 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259313 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259324 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259333 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259341 4813 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259350 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259358 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259366 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259374 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259382 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259390 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259398 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259406 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259419 4813 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259428 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259436 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259444 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259453 4813 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259461 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259470 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259478 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259486 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259494 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259504 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259512 4813 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259521 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259530 4813 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259538 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259546 4813 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259555 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259562 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259571 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259579 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259587 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259595 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259603 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259612 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259620 4813 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259631 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259640 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259650 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259658 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259665 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259673 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259682 4813 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259692 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259702 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259709 4813 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259717 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259726 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259734 4813 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259743 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259751 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259759 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259767 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259775 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.259784 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.268022 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.272457 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.278566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.278616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.278626 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.278643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.278655 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.282322 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.289011 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f
4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeByte
s\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.291066 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.295527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.295589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.295599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.295617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.295629 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.303105 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.306197 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"f
bb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.311264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.311306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.311316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.311333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.311383 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.312931 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.320862 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.324306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.324340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.324352 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.324369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.324380 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.333341 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.333517 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.333665 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.336399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.336510 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.336604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.336699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.336779 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: W1202 10:08:14.346679 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-39c9019a64155f965831a4002c13af4d9d55b1769ed7105d96d7cfc0c241195d WatchSource:0}: Error finding container 39c9019a64155f965831a4002c13af4d9d55b1769ed7105d96d7cfc0c241195d: Status 404 returned error can't find the container with id 39c9019a64155f965831a4002c13af4d9d55b1769ed7105d96d7cfc0c241195d Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.350518 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.356247 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-x7cgx"] Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.356919 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.357798 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jj7j"] Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.363194 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.363387 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.363615 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.363898 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-x4ggp"] Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.364021 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.364974 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.365286 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.367238 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.367883 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.367983 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.368127 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.368180 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.368346 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.368381 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.371186 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.371319 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.371319 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.376411 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.383612 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.395494 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.396246 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-97mdk" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.405194 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.412817 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.426620 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.439906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.439956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.439970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.440025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.440039 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.442331 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.456434 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461646 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-etc-kubernetes\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461699 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-var-lib-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461730 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-script-lib\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461753 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-conf-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-env-overrides\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461801 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-cni-bin\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-multus-certs\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-systemd\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461871 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461895 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-cnibin\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-socket-dir-parent\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461945 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-netns\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-netd\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.461991 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-cni-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-os-release\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-etc-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462062 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-bin\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-log-socket\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462140 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-k8s-cni-cncf-io\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462164 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-kubelet\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462190 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-system-cni-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462245 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxq2g\" (UniqueName: \"kubernetes.io/projected/13fee0e7-46f3-4e78-ac37-0764b073f270-kube-api-access-wxq2g\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462292 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-systemd-units\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462326 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-ovn\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-node-log\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462381 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mllp7\" (UniqueName: \"kubernetes.io/projected/3551771a-22ef-4f85-ad6b-fa4033a3f90f-kube-api-access-mllp7\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462515 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-os-release\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc 
kubenswrapper[4813]: I1202 10:08:14.462563 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-daemon-config\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-config\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462631 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-hostroot\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462687 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-kubelet\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovn-node-metrics-cert\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462770 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/13fee0e7-46f3-4e78-ac37-0764b073f270-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462797 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-cni-multus\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462818 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6vrs\" (UniqueName: \"kubernetes.io/projected/30b516bc-ab92-49fb-8f3b-431cf0ef3164-kube-api-access-q6vrs\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-slash\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-system-cni-dir\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462890 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-netns\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.462981 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/13fee0e7-46f3-4e78-ac37-0764b073f270-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.463038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/30b516bc-ab92-49fb-8f3b-431cf0ef3164-cni-binary-copy\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.463061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-cnibin\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.469993 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.480053 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.490322 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.504787 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.517353 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.536722 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.554401 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.554728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.554747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.554755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.554769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.554779 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.563886 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-cnibin\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.563925 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.563954 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/30b516bc-ab92-49fb-8f3b-431cf0ef3164-cni-binary-copy\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.563973 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-script-lib\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.563993 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-conf-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564007 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-etc-kubernetes\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564023 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-var-lib-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564254 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-env-overrides\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564267 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-multus-certs\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564281 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-systemd\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564295 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564322 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-cni-bin\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-netns\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-netd\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564370 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-cni-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564383 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-cnibin\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564399 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-socket-dir-parent\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564414 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-bin\") pod 
\"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564431 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564445 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-os-release\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-etc-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-log-socket\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564490 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-system-cni-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-k8s-cni-cncf-io\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-kubelet\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564548 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxq2g\" (UniqueName: \"kubernetes.io/projected/13fee0e7-46f3-4e78-ac37-0764b073f270-kube-api-access-wxq2g\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564582 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-systemd-units\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-ovn\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564630 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-node-log\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564662 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mllp7\" (UniqueName: \"kubernetes.io/projected/3551771a-22ef-4f85-ad6b-fa4033a3f90f-kube-api-access-mllp7\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564677 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-os-release\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564696 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-daemon-config\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564737 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-config\") pod \"ovnkube-node-8jj7j\" (UID: 
\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovn-node-metrics-cert\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/13fee0e7-46f3-4e78-ac37-0764b073f270-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-hostroot\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564824 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-kubelet\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6vrs\" (UniqueName: \"kubernetes.io/projected/30b516bc-ab92-49fb-8f3b-431cf0ef3164-kube-api-access-q6vrs\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-slash\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564871 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-system-cni-dir\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564886 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564918 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-cni-multus\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/13fee0e7-46f3-4e78-ac37-0764b073f270-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.564950 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-netns\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565000 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-netns\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565033 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-conf-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-script-lib\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565056 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-etc-kubernetes\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-var-lib-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-cnibin\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " 
pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.565219 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.565241 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.565253 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.565294 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:15.565279422 +0000 UTC m=+19.760453724 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-env-overrides\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565471 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-multus-certs\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565493 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-systemd\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565536 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-cni-bin\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " 
pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565556 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-netns\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565578 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-netd\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565623 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-cni-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565671 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-cnibin\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-socket-dir-parent\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565721 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-bin\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.565750 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.565772 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:15.565765017 +0000 UTC m=+19.760939319 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565814 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-os-release\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565821 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/30b516bc-ab92-49fb-8f3b-431cf0ef3164-cni-binary-copy\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-etc-openvswitch\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-log-socket\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565887 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-system-cni-dir\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565909 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-run-k8s-cni-cncf-io\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.565936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-kubelet\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.566561 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/13fee0e7-46f3-4e78-ac37-0764b073f270-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.566593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-hostroot\") pod \"multus-x7cgx\" (UID: 
\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.566615 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-kubelet\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.566753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-slash\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.566777 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-system-cni-dir\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567097 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.567150 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.567184 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:15.567176351 +0000 UTC m=+19.762350653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567208 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/30b516bc-ab92-49fb-8f3b-431cf0ef3164-host-var-lib-cni-multus\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567580 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/13fee0e7-46f3-4e78-ac37-0764b073f270-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567645 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-node-log\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.567695 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.567715 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.567724 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.567747 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:15.567739958 +0000 UTC m=+19.762914260 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567771 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567796 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-systemd-units\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567818 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-ovn\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567874 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/13fee0e7-46f3-4e78-ac37-0764b073f270-os-release\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.567900 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.568766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/30b516bc-ab92-49fb-8f3b-431cf0ef3164-multus-daemon-config\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.569513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-config\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.575737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovn-node-metrics-cert\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 
10:08:14.589563 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.599966 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6vrs\" (UniqueName: \"kubernetes.io/projected/30b516bc-ab92-49fb-8f3b-431cf0ef3164-kube-api-access-q6vrs\") pod \"multus-x7cgx\" (UID: \"30b516bc-ab92-49fb-8f3b-431cf0ef3164\") " pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.599976 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mllp7\" (UniqueName: \"kubernetes.io/projected/3551771a-22ef-4f85-ad6b-fa4033a3f90f-kube-api-access-mllp7\") pod \"ovnkube-node-8jj7j\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.602687 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxq2g\" (UniqueName: \"kubernetes.io/projected/13fee0e7-46f3-4e78-ac37-0764b073f270-kube-api-access-wxq2g\") pod \"multus-additional-cni-plugins-x4ggp\" (UID: \"13fee0e7-46f3-4e78-ac37-0764b073f270\") " pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.612861 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.626561 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.637409 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.648981 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.657130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.657174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.657185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.657203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.657216 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.659879 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.666290 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:14 crc kubenswrapper[4813]: E1202 10:08:14.666817 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:08:15.666799221 +0000 UTC m=+19.861973523 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.671922 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.677423 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-x7cgx" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.683305 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: W1202 10:08:14.696470 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30b516bc_ab92_49fb_8f3b_431cf0ef3164.slice/crio-913aaf8253adf7972f982c5de0e59083ebfa5f5600d50c599efe1970d9fd38ff WatchSource:0}: Error finding container 913aaf8253adf7972f982c5de0e59083ebfa5f5600d50c599efe1970d9fd38ff: Status 404 returned error can't find the container with id 913aaf8253adf7972f982c5de0e59083ebfa5f5600d50c599efe1970d9fd38ff Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.697844 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.706234 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745326
5a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",
\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"h
ostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.715457 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.726064 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.742028 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.760107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.760638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.760648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 
10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.760664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.760673 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.869851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.869914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.869927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.869946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.869961 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.971964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.972023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.972038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.972056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:14 crc kubenswrapper[4813]: I1202 10:08:14.972096 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:14Z","lastTransitionTime":"2025-12-02T10:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.067432 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.067591 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.074792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.074852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.074867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.074886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.074901 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.177196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.177586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.177600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.177619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.177630 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.208210 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerStarted","Data":"6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.208261 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerStarted","Data":"ee24d93e03b64171fec3510b6d2aa20a11fd241418a0a910a437c5b66e6b6158"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.210056 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d" exitCode=0
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.210167 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.210201 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"cf723e6189d81899009071094f2bd195b6e42d389fdd5df1a6deb3e4dbc652b6"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.215053 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x7cgx" event={"ID":"30b516bc-ab92-49fb-8f3b-431cf0ef3164","Type":"ContainerStarted","Data":"c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.215148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x7cgx" event={"ID":"30b516bc-ab92-49fb-8f3b-431cf0ef3164","Type":"ContainerStarted","Data":"913aaf8253adf7972f982c5de0e59083ebfa5f5600d50c599efe1970d9fd38ff"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.228406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.228475 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.228489 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"25af7c4d7444275336345b04e7b0d8ce6fce4ca21103d7cbce3577b82e19e741"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.230389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-97mdk" event={"ID":"3fbb40e6-955d-4ba1-b48f-e535ed20494d","Type":"ContainerStarted","Data":"3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.230459 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-97mdk" event={"ID":"3fbb40e6-955d-4ba1-b48f-e535ed20494d","Type":"ContainerStarted","Data":"3f56132c7b2036f4d292a000f1d44aaa0190ce76e27b255b170ff5c9e03f9bd1"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.232316 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.232354 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.232367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d51e489193ae5120f230721678437c187097271e469aa5467ab54342b15d8835"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.239502 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.240764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.240835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"39c9019a64155f965831a4002c13af4d9d55b1769ed7105d96d7cfc0c241195d"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.242712 4813 scope.go:117] "RemoveContainer" containerID="549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.243086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ac0ae0c1d3f0174d3d37d3f0d2e44ee559535cdb5cbf921ef3f4b151dcb2f776"}
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.290325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.290384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.290395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.290416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.290431 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.294531 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.324516 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc 
kubenswrapper[4813]: I1202 10:08:15.352863 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.377636 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.396106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.396156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.396167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.396189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.396203 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.396855 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.422471 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc1
8fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.443958 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.462448 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.477165 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.493287 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.506720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.506782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.506794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.506811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.506824 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.552950 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.577783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.577839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.577872 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.577892 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578035 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578052 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578064 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578081 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578127 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578104 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578207 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578222 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578137 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:17.578122286 +0000 UTC m=+21.773296588 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578292 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:17.57825459 +0000 UTC m=+21.773428892 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578323 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:17.578312882 +0000 UTC m=+21.773487184 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.578347 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:17.578337893 +0000 UTC m=+21.773512195 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.600391 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.609248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.609295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.609305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.609321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.609340 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.622030 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.636309 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.648660 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.670856 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02
T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.680201 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:15 crc kubenswrapper[4813]: E1202 10:08:15.680487 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:08:17.680428869 +0000 UTC m=+21.875603261 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.685774 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.703800 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.711676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.711721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.711733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.711751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.711766 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.724617 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f
2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.741028 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc 
kubenswrapper[4813]: I1202 10:08:15.754482 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.771136 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.797526 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.817306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.817348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.817373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.817389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.817402 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.819324 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.834028 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:15Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.919739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.919787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.919799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.919817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:15 crc kubenswrapper[4813]: I1202 10:08:15.919849 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:15Z","lastTransitionTime":"2025-12-02T10:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.024544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.024593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.024607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.024622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.024634 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.067481 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.067534 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:16 crc kubenswrapper[4813]: E1202 10:08:16.067615 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:16 crc kubenswrapper[4813]: E1202 10:08:16.067708 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.071649 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.072498 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.074016 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.074880 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.075930 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.076497 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.077138 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.078678 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.079558 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.082014 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 02 
10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.083021 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.084192 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.084723 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.085438 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.086140 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.087423 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.088113 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.089569 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.090191 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.090852 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.092030 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.092631 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.093904 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.094623 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.099233 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.099728 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.100371 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.101181 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.103782 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.104414 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.105564 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.106160 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.107280 4813 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.107392 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.110459 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.111858 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.112581 4813 
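
The long run of kubelet_volumes.go:163 messages is post-restart housekeeping: the kubelet walks /var/lib/kubelet/pods and deletes leftover volume directories for pod UIDs it no longer tracks, clearing orphaned volume-subpaths first (the kubelet_volumes.go:152 entry for pod 6ea678ab-... above). A simplified sketch of that sweep, assuming every volume is already unmounted; the kubelet's real cleanup also verifies mount state before removing anything:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes volume data for pod UIDs that are no
// longer active on the node. "active" would come from the pod manager.
func cleanupOrphanedPodDirs(podsRoot string, active map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue // running pod, or not a per-pod directory
		}
		for _, sub := range []string{"volume-subpaths", "volumes"} {
			dir := filepath.Join(podsRoot, e.Name(), sub)
			if err := os.RemoveAll(dir); err != nil {
				return err
			}
			fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), dir)
		}
	}
	return nil
}

func main() {
	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{})
}
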
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.114562 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.115489 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.116684 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.117496 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.118780 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.119323 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.120452 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.122209 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.123410 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.123986 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.125177 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.125642 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.126217 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.127182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 
10:08:16.127223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.127234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.127251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.127262 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.127960 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.128607 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.130358 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.130941 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.132129 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.132836 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.133399 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.140060 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
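
The setters.go:603 entry explains the NotReady condition: the container runtime reports NetworkReady=false because no CNI configuration has been written yet; OVN-Kubernetes is still initializing (see the ovnkube-node-8jj7j container starts below), and once it drops a config into /etc/kubernetes/cni/net.d/ the condition clears. A hedged sketch of the readiness gate, assuming the runtime simply polls the conf dir for a CNI config file:

package main

import (
	"fmt"
	"path/filepath"
)

// networkReady treats the network plugin as ready once at least one CNI
// config file exists in the runtime's configured conf dir.
func networkReady(confDir string) error {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		if m, _ := filepath.Glob(filepath.Join(confDir, pat)); len(m) > 0 {
			return nil
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := networkReady("/etc/kubernetes/cni/net.d/"); err != nil {
		// The kubelet embeds this text in the node's Ready=False condition.
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error:", err)
	}
}
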
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.159634 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod 
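
The quoted patch bodies in these failures are strategic merge patches, not whole Pod objects: the kubelet sends only the changed conditions and containerStatuses, and the $setElementOrder/conditions directive lists the complete condition order so the API server can merge the partial list deterministically. A minimal sketch of how such an envelope is assembled, with values copied from the network-check-source patch above; the kubelet itself derives the diff as a two-way strategic merge patch between old and new status:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "9d751cbb-f2e2-430d-9754-c882a5e924a5"},
		"status": map[string]any{
			// Directive: preserve this ordering of "conditions" after merge.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Only the entries that changed are sent, keyed by "type".
			"conditions": []map[string]any{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady",
					"message": "containers with unready status: [check-endpoints]"},
			},
		},
	}
	body, _ := json.Marshal(patch)
	fmt.Println(string(body)) // the wire payload of the PATCH request
}
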
\"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.174524 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.192258 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.205414 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.220401 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc 
kubenswrapper[4813]: I1202 10:08:16.229999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.230049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.230060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.230094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.230107 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.234618 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02
T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.247557 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.247809 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.249757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.250151 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.252047 4813 generic.go:334] "Generic (PLEG): container finished" podID="13fee0e7-46f3-4e78-ac37-0764b073f270" containerID="6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79" exitCode=0 Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.252096 4813 generic.go:334] "Generic (PLEG): container finished" podID="13fee0e7-46f3-4e78-ac37-0764b073f270" containerID="97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239" exitCode=0 Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.252156 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerDied","Data":"6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.252276 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerDied","Data":"97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.255543 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.255581 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.255593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.255603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.255616 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.264695 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
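
The SyncLoop (PLEG) lines above are the kubelet's pod lifecycle event generator relaying container state changes it relisted from CRI-O: ContainerStarted/ContainerDied events keyed by pod UID, with the container (or sandbox) ID as payload. A hedged reconstruction of just the event shape visible in the log; the kubelet's actual type carries Data as an interface{}:

package main

import "fmt"

type PodLifeCycleEventType string

const (
	ContainerStarted PodLifeCycleEventType = "ContainerStarted"
	ContainerDied    PodLifeCycleEventType = "ContainerDied"
)

// PodLifecycleEvent mirrors the {"ID":...,"Type":...,"Data":...} structure
// printed in the log: ID is the pod UID, Data the container ID.
type PodLifecycleEvent struct {
	ID   string
	Type PodLifeCycleEventType
	Data string
}

func main() {
	ev := PodLifecycleEvent{
		ID:   "13fee0e7-46f3-4e78-ac37-0764b073f270",
		Type: ContainerDied,
		Data: "6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79",
	}
	fmt.Printf("SyncLoop (PLEG): event for pod %s: %+v\n", ev.ID, ev)
}
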
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.306660 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z 
is after 2025-08-24T17:21:41Z"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.333121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.333182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.333195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.333228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.333241 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.337496 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z"
Dec 02 
10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.376177 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.417435 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.439529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.439581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.439594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.439613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.439629 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.460784 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.502153 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.539903 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.542268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.542303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.542317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.542333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.542344 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.577457 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.618905 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.645319 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.645373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.645384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.645435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.645451 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.656140 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.699467 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.741052 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.748433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.748476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.748488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.748505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.748517 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.785387 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.813247 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.851730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.851789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.851804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.851827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.851842 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.954300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.954351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.954367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.954388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:16 crc kubenswrapper[4813]: I1202 10:08:16.954404 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:16Z","lastTransitionTime":"2025-12-02T10:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.057390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.057442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.057454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.057471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.057483 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.067748 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.067885 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
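
Every "Failed to update status for pod" entry above shares one root cause: the API server forwards each status patch to the pod.network-node-identity.openshift.io mutating webhook on 127.0.0.1:9743, and the TLS handshake fails because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-02, which is consistent with a CRC VM resumed months after its certificates were minted. (The webhook itself is the "webhook" container of network-node-identity-vrzqb, which mounts its cert at /etc/webhook-cert/.) A minimal Go sketch for confirming the validity window from the node; the address is taken from the log, everything else is illustrative:

    // certcheck.go - connect to a TLS endpoint without chain verification
    // and compare the leaf certificate's validity window to the local
    // clock. InsecureSkipVerify is what lets us inspect a certificate the
    // kubelet's client rightly rejects.
    package main

    import (
        "crypto/tls"
        "fmt"
        "os"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint, per the log
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()

        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            fmt.Fprintln(os.Stderr, "no peer certificate presented")
            os.Exit(1)
        }
        leaf, now := certs[0], time.Now().UTC()
        fmt.Printf("subject:   %s\n", leaf.Subject)
        fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
        switch {
        case now.Before(leaf.NotBefore):
            fmt.Println("certificate is not yet valid")
        case now.After(leaf.NotAfter):
            fmt.Println("certificate has expired") // the condition reported above
        default:
            fmt.Println("certificate is currently valid")
        }
    }

Run on the node, this should print notAfter: 2025-08-24T17:21:41Z followed by "certificate has expired", matching the x509 error the kubelet keeps logging.
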
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.161179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.161221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.161232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.161250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.161262 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.260972 4813 generic.go:334] "Generic (PLEG): container finished" podID="13fee0e7-46f3-4e78-ac37-0764b073f270" containerID="d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c" exitCode=0 Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.261048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerDied","Data":"d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.262829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.262864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.262873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.262886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.262896 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.265149 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.273270 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.286519 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.300868 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.315984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.330424 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.342746 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.365891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.365957 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.365969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.365986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.366034 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
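
The patch bodies quoted in these err= fields are hard to read because the JSON is escaped twice: the patch is a quoted string inside the kubelet's message, and that message is itself the quoted value of err="...". One strconv.Unquote per layer, plus json.Indent, recovers the strategic-merge patch, including the $setElementOrder/conditions directive that pins the ordering of the conditions list during the merge. A sketch using only the standard library; the embedded sample is a trimmed, singly escaped fragment of the machine-config-daemon patch above:

    // patchdump.go - unquote and pretty-print a status patch as logged by
    // the kubelet status manager. Add a second strconv.Unquote round when
    // starting from the raw err="..." value, which carries one more
    // quoting layer than this sample.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "strconv"
    )

    func main() {
        quoted := `"{\"metadata\":{\"uid\":\"db121737-190f-4b43-9d79-e96e2dd76080\"},\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"True\"}]}}"`

        raw, err := strconv.Unquote(quoted)
        if err != nil {
            panic(err)
        }
        var out bytes.Buffer
        if err := json.Indent(&out, []byte(raw), "", "  "); err != nil {
            panic(err)
        }
        fmt.Println(out.String())
    }
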
Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.365627 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f
2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.384748 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.403307 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.416351 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.441331 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.455993 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.468771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.468824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.468837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.468860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.468879 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.472572 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:17Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.571746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.572034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.572282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.572308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.572322 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.601299 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.601369 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.601411 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.601441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601572 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601570 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601709 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:21.601681211 +0000 UTC m=+25.796855513 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601572 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601741 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601754 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601807 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:21.601791804 +0000 UTC m=+25.796966106 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601573 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601841 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:21.601835715 +0000 UTC m=+25.797010007 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601600 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601854 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.601873 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:21.601868026 +0000 UTC m=+25.797042328 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.676961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.677030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.677046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.677094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.677107 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.702423 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:17 crc kubenswrapper[4813]: E1202 10:08:17.702688 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:08:21.702649122 +0000 UTC m=+25.897823424 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.779458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.779508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.779521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.779540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.779558 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.882040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.882101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.882112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.882127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.882141 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.985471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.985514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.985534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.985555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:17 crc kubenswrapper[4813]: I1202 10:08:17.985566 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:17Z","lastTransitionTime":"2025-12-02T10:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.067811 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.067891 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:18 crc kubenswrapper[4813]: E1202 10:08:18.068003 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:18 crc kubenswrapper[4813]: E1202 10:08:18.068220 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.088824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.088876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.088890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.088908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.088920 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.191884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.191964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.191974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.191991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.192007 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.269215 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.271785 4813 generic.go:334] "Generic (PLEG): container finished" podID="13fee0e7-46f3-4e78-ac37-0764b073f270" containerID="dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9" exitCode=0 Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.271823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerDied","Data":"dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.283248 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.295124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.295169 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.295183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.295201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.295217 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.296293 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.314648 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.327846 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.340113 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.362055 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z 
is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.379894 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.393424 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.398621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.398662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.398673 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.398689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.398700 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.406645 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.423984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",
\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.449905 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.479209 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.493268 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.504989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.505048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.505061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.505100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.505121 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.519300 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.532101 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.547687 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.562103 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.582911 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.597448 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.608163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.608206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.608217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.608233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.608247 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.620904 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f
2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.640858 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.654768 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.670399 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.688114 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.701477 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.711007 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.711050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.711061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.711094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.711105 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.714493 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:18Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.813818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.813854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.813863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.813878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.813897 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.917605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.917791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.917819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.917842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:18 crc kubenswrapper[4813]: I1202 10:08:18.917858 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:18Z","lastTransitionTime":"2025-12-02T10:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.021501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.021561 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.021575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.021594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.021607 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.067333 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 10:08:19 crc kubenswrapper[4813]: E1202 10:08:19.067514 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.124686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.124742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.124757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.124778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.124790 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.227773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.227851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.227866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.227890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.227904 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.279918 4813 generic.go:334] "Generic (PLEG): container finished" podID="13fee0e7-46f3-4e78-ac37-0764b073f270" containerID="83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348" exitCode=0 Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.280018 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerDied","Data":"83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.288453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.297670 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.316364 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.330614 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.330758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.330799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.330810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.330832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.330845 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.343904 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.365540 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.384779 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.403787 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.420276 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.433656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.433698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.433709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.433730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.433743 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.434369 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.447791 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.462303 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.482765 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.499952 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:19Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.536462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.536522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.536535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.536556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.536574 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.639530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.639581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.639593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.639611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.639622 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.742983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.743028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.743036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.743050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.743060 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.846169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.846214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.846246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.846261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.846273 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.950479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.950557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.950577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.950605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:19 crc kubenswrapper[4813]: I1202 10:08:19.950628 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:19Z","lastTransitionTime":"2025-12-02T10:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.025762 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8f9dg"] Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.026515 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8f9dg" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.029526 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.030013 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.030173 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.032360 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.046946 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.053103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.053165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.053178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.053197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.053212 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.067352 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:20 crc kubenswrapper[4813]: E1202 10:08:20.067564 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.068202 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:20 crc kubenswrapper[4813]: E1202 10:08:20.068496 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.068754 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.086764 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.102404 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.120965 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.127459 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/77042011-320e-4ef3-839b-013ae0e97908-serviceca\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.127526 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wjjr\" (UniqueName: \"kubernetes.io/projected/77042011-320e-4ef3-839b-013ae0e97908-kube-api-access-2wjjr\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.127566 
4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77042011-320e-4ef3-839b-013ae0e97908-host\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.133857 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.148426 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.155727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.155769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.155782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.155801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.155814 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.171120 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f
2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.188766 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.208474 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.226159 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.228543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wjjr\" (UniqueName: \"kubernetes.io/projected/77042011-320e-4ef3-839b-013ae0e97908-kube-api-access-2wjjr\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.228621 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77042011-320e-4ef3-839b-013ae0e97908-host\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.228660 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/77042011-320e-4ef3-839b-013ae0e97908-serviceca\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.228760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77042011-320e-4ef3-839b-013ae0e97908-host\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.230156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/77042011-320e-4ef3-839b-013ae0e97908-serviceca\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.241342 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.247875 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wjjr\" (UniqueName: \"kubernetes.io/projected/77042011-320e-4ef3-839b-013ae0e97908-kube-api-access-2wjjr\") pod \"node-ca-8f9dg\" (UID: \"77042011-320e-4ef3-839b-013ae0e97908\") " pod="openshift-image-registry/node-ca-8f9dg"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.258245 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.259614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.259665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.259681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.259708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.259731 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.275212 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z"
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.295030 4813 generic.go:334] "Generic (PLEG): container finished" podID="13fee0e7-46f3-4e78-ac37-0764b073f270" containerID="5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef" exitCode=0
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.295103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerDied","Data":"5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef"}
Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.312099 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.333905 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.348557 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.350853 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8f9dg" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.363439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.363484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.363499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.363518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.363532 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.365244 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: W1202 10:08:20.366274 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77042011_320e_4ef3_839b_013ae0e97908.slice/crio-dd0374aeab714bf3f6a9b144af3d1ea4efbcf00d165d5565d93169ea65168bed WatchSource:0}: Error finding container dd0374aeab714bf3f6a9b144af3d1ea4efbcf00d165d5565d93169ea65168bed: Status 404 returned error can't find the container with id dd0374aeab714bf3f6a9b144af3d1ea4efbcf00d165d5565d93169ea65168bed Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.387966 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.406487 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.430464 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.445785 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.459722 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.466204 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.466266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.466286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.466310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.466328 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.479015 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.490889 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.505542 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.521528 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.533461 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:20Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.569438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.569493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.569505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.569526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.569540 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.673441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.673495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.673512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.673541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.673559 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.777214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.777263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.777275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.777292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.777307 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.880992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.881042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.881056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.881090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.881106 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.983601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.983685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.983746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.983781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:20 crc kubenswrapper[4813]: I1202 10:08:20.983804 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:20Z","lastTransitionTime":"2025-12-02T10:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.067802 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.068019 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.090870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.090932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.090949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.090975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.091183 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.194010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.194058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.194106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.194131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.194151 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.297159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.297203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.297217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.297238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.297256 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.304826 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" event={"ID":"13fee0e7-46f3-4e78-ac37-0764b073f270","Type":"ContainerStarted","Data":"e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.313774 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.314580 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.314634 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.317006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8f9dg" event={"ID":"77042011-320e-4ef3-839b-013ae0e97908","Type":"ContainerStarted","Data":"563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.317026 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8f9dg" event={"ID":"77042011-320e-4ef3-839b-013ae0e97908","Type":"ContainerStarted","Data":"dd0374aeab714bf3f6a9b144af3d1ea4efbcf00d165d5565d93169ea65168bed"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.323977 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.337650 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.339334 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.345709 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.351146 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.361519 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c
28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.373180 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",
\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.382150 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.393486 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.400301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.400368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.400382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.400402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.400414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.404617 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.415847 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.438087 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z 
is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.454761 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.468536 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.482174 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.493366 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.503886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.503934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.503948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.503963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.503974 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.507961 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.524540 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.539331 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.551806 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.568247 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.580457 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.596445 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.607584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.607632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.607643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.607659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.607669 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.622313 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.634514 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.643470 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.643522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.643547 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.643584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643652 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643684 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643725 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643773 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643794 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643824 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643894 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643930 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643744 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:29.643722562 +0000 UTC m=+33.838896864 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643967 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:29.643952909 +0000 UTC m=+33.839127221 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.643996 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:29.64398555 +0000 UTC m=+33.839159852 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.644015 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:29.644005971 +0000 UTC m=+33.839180273 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.656966 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.670604 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.688317 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.711001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.711049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.711063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.711104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.711121 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.717263 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.730852 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:21Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.744207 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:21 crc kubenswrapper[4813]: E1202 10:08:21.744421 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:08:29.744397325 +0000 UTC m=+33.939571647 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.814923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.814990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.815013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.815035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.815051 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.917854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.917897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.917908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.917927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:21 crc kubenswrapper[4813]: I1202 10:08:21.917940 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:21Z","lastTransitionTime":"2025-12-02T10:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.021108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.021179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.021191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.021210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.021224 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.066900 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.066916 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:22 crc kubenswrapper[4813]: E1202 10:08:22.067141 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:22 crc kubenswrapper[4813]: E1202 10:08:22.067361 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.124607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.124681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.124719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.124756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.124786 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.227827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.227889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.227904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.227922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.227934 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.320250 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.330598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.330660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.330677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.330703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.330722 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.434012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.434066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.434131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.434160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.434182 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.537173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.537230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.537247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.537266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.537287 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.640303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.640340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.640348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.640363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.640375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.743534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.743569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.743584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.743600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.743615 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.846590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.846635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.846644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.846657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.846668 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.949792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.949851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.949875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.949900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:22 crc kubenswrapper[4813]: I1202 10:08:22.950266 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:22Z","lastTransitionTime":"2025-12-02T10:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.057357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.057410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.057423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.057445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.057462 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.066726 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:23 crc kubenswrapper[4813]: E1202 10:08:23.066861 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.160533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.160574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.160582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.160598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.160608 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.263051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.263121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.263135 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.263150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.263161 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.323541 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.365956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.365996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.366008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.366024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.366036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.471836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.471893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.471907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.471928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.471943 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.574993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.575054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.575093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.575113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.575127 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.678252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.678313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.678325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.678344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.678361 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.781036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.781100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.781112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.781129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.781142 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.883587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.883639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.883650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.883665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.883676 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.986861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.986920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.986929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.986947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:23 crc kubenswrapper[4813]: I1202 10:08:23.986957 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:23Z","lastTransitionTime":"2025-12-02T10:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.067506 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.067591 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 10:08:24 crc kubenswrapper[4813]: E1202 10:08:24.067727 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 10:08:24 crc kubenswrapper[4813]: E1202 10:08:24.067979 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.090294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.090415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.090442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.090479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.090508 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.193260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.193314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.193325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.193342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.193354 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.296118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.296193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.296209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.296230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.296244 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.329720 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/0.log"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.332876 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466" exitCode=1
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.332926 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466"}
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.333608 4813 scope.go:117] "RemoveContainer" containerID="25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466"
Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.350105 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.360601 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.376352 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.395096 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.399997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.400062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.400114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.400135 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.400153 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.412997 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.427369 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.444805 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585
ca9da27ac3be8a6d8f6b3466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.460771 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.473879 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.486923 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.499667 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.503137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.503188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.503200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.503219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.503231 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.514305 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.527913 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.538496 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.605881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.605931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.605943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.605957 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.605968 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.630005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.630104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.630124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.630148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.630164 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: E1202 10:08:24.644579 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.649357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.649405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.649418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.649435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.649446 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: E1202 10:08:24.661368 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.666405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.666473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.666490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.666514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.666533 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: E1202 10:08:24.680237 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.684258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.684315 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.684328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.684346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.684360 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: E1202 10:08:24.700053 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.704725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.704770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.704782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.704795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.704805 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: E1202 10:08:24.723358 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:24Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:24 crc kubenswrapper[4813]: E1202 10:08:24.723952 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.726002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.726035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.726051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.726099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.726113 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.828848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.828906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.828920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.828942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.828954 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.932165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.932217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.932227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.932246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:24 crc kubenswrapper[4813]: I1202 10:08:24.932258 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:24Z","lastTransitionTime":"2025-12-02T10:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.035969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.036007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.036017 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.036032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.036042 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.067537 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:25 crc kubenswrapper[4813]: E1202 10:08:25.067706 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.139016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.139100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.139114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.139132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.139144 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.241639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.241687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.241698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.241716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.241728 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.339097 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/0.log" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.342778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.342957 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.343572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.343614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.343626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.343646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.343660 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.356801 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.371506 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.383216 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.397364 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.413268 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.427120 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.446605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.446642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.446651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.446665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.446674 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.449729 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.468950 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.486828 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.501237 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.516624 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.532129 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.547429 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.549418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.549458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.549470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.549489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.549501 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.560510 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:25Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.651985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.652111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.652155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.652187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.652209 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.755529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.755569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.755579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.755598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.755612 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.858513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.858566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.858575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.858590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.858602 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.962368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.962417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.962430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.962449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:25 crc kubenswrapper[4813]: I1202 10:08:25.962463 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:25Z","lastTransitionTime":"2025-12-02T10:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.065331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.065408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.065422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.065438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.065450 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.067631 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:26 crc kubenswrapper[4813]: E1202 10:08:26.067756 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.067814 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:26 crc kubenswrapper[4813]: E1202 10:08:26.067992 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.090493 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.110910 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.128533 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.146172 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.164713 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\"
,\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.167565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.167608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.167621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.167642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.167660 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.179781 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.196619 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.211324 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.224125 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.249525 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o
://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.267361 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.270097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.270160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc 
kubenswrapper[4813]: I1202 10:08:26.270182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.270206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.270223 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.283358 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.295247 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.309449 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.348248 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/1.log" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.349325 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/0.log" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.352711 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f" exitCode=1 Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.352759 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.352821 4813 scope.go:117] "RemoveContainer" containerID="25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.353820 4813 scope.go:117] "RemoveContainer" containerID="66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f" Dec 02 10:08:26 crc kubenswrapper[4813]: E1202 10:08:26.354024 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.369152 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.373477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.373601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.373618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.373636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.373650 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.387737 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.401438 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.417300 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.433298 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.452405 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37d
a0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.466155 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.480364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.480414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc 
kubenswrapper[4813]: I1202 10:08:26.480428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.480446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.480459 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.484679 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.496762 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.509026 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.522135 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.536104 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.551616 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.562026 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.584505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 
10:08:26.584563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.584573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.584591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.584604 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.688439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.688499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.688509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.688526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.688537 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.792111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.792175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.792191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.792210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.792223 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.895576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.895633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.895646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.895661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.895674 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.998453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.998530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.998548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.998572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:26 crc kubenswrapper[4813]: I1202 10:08:26.998592 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:26Z","lastTransitionTime":"2025-12-02T10:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.067552 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:27 crc kubenswrapper[4813]: E1202 10:08:27.067806 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.101726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.101783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.101799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.101816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.101827 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.103888 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff"] Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.104528 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.107093 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.109249 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.129643 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.153845 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.173704 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.192969 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.203193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fbbe6fd-3820-474c-af83-dc3efb10dea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.203377 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fbbe6fd-3820-474c-af83-dc3efb10dea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.203484 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d2tn\" (UniqueName: \"kubernetes.io/projected/2fbbe6fd-3820-474c-af83-dc3efb10dea5-kube-api-access-8d2tn\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.203643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fbbe6fd-3820-474c-af83-dc3efb10dea5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.205696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.205737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.205755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.205782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.205799 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.208208 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.224265 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.242323 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.261901 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.276560 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.290788 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.305298 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fbbe6fd-3820-474c-af83-dc3efb10dea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.305389 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d2tn\" (UniqueName: \"kubernetes.io/projected/2fbbe6fd-3820-474c-af83-dc3efb10dea5-kube-api-access-8d2tn\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.305430 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fbbe6fd-3820-474c-af83-dc3efb10dea5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.305511 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fbbe6fd-3820-474c-af83-dc3efb10dea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.306631 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fbbe6fd-3820-474c-af83-dc3efb10dea5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.306793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fbbe6fd-3820-474c-af83-dc3efb10dea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.308272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.308318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.308335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.308358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.308376 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.315495 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fbbe6fd-3820-474c-af83-dc3efb10dea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.317118 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.338924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d2tn\" (UniqueName: \"kubernetes.io/projected/2fbbe6fd-3820-474c-af83-dc3efb10dea5-kube-api-access-8d2tn\") pod \"ovnkube-control-plane-749d76644c-7fjff\" (UID: \"2fbbe6fd-3820-474c-af83-dc3efb10dea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.339157 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.358759 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/1.log" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.358785 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.379272 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.400230 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:27Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.411616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.411678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.411694 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.411738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.411753 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.426495 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.515258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.515321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.515334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.515355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.515369 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.618035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.618110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.618125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.618144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.618157 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.721443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.721499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.721514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.721533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.721549 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.825263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.825352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.825375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.825406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.825432 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.928554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.928629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.928655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.928687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:27 crc kubenswrapper[4813]: I1202 10:08:27.928712 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:27Z","lastTransitionTime":"2025-12-02T10:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.033060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.033460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.033471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.033494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.033507 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.067411 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.067496 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:28 crc kubenswrapper[4813]: E1202 10:08:28.067617 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:28 crc kubenswrapper[4813]: E1202 10:08:28.067754 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.136364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.136417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.136430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.136450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.136464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.239267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.239320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.239340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.239357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.239368 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.274235 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.285284 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.298144 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.308984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.321995 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.336625 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.341214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.341248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.341260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.341280 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.341293 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.348596 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.361996 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.367186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" event={"ID":"2fbbe6fd-3820-474c-af83-dc3efb10dea5","Type":"ContainerStarted","Data":"0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.367284 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" event={"ID":"2fbbe6fd-3820-474c-af83-dc3efb10dea5","Type":"ContainerStarted","Data":"0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.367298 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" event={"ID":"2fbbe6fd-3820-474c-af83-dc3efb10dea5","Type":"ContainerStarted","Data":"faa588340c1a93b5edc7c7250c04b10844e5f44f1a9d3f1b9e0a3ee11b3ba73c"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.383733 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b8
01ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.401095 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.414781 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.430396 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.446632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.446672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.446692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.446722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.447031 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.447562 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.459097 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.474956 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.487578 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.499030 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.509509 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.519395 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.533622 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.549904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.549942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.549952 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.549966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.549976 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.557056 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.574913 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contain
erID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.588809 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.604909 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.613688 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/network-metrics-daemon-62bfc"] Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.614295 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:28 crc kubenswrapper[4813]: E1202 10:08:28.614373 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.617869 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.632597 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.647047 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.652360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.652396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.652406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.652422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.652435 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.661852 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.675811 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.688463 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.699896 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.717158 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.719867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.719941 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwzbb\" (UniqueName: \"kubernetes.io/projected/05bb9583-6b23-4207-b709-89dfe49fad73-kube-api-access-wwzbb\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.730859 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.747056 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.754664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.754714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.754732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.754750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.754762 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.762038 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.774513 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.787519 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.811109 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37d
a0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.820683 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.820752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwzbb\" (UniqueName: \"kubernetes.io/projected/05bb9583-6b23-4207-b709-89dfe49fad73-kube-api-access-wwzbb\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:28 crc kubenswrapper[4813]: E1202 10:08:28.820879 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:28 crc kubenswrapper[4813]: E1202 10:08:28.820951 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs podName:05bb9583-6b23-4207-b709-89dfe49fad73 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:29.320933947 +0000 UTC m=+33.516108249 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs") pod "network-metrics-daemon-62bfc" (UID: "05bb9583-6b23-4207-b709-89dfe49fad73") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.825771 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.836234 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.841593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwzbb\" (UniqueName: \"kubernetes.io/projected/05bb9583-6b23-4207-b709-89dfe49fad73-kube-api-access-wwzbb\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.851263 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.857371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.857433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.857442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.857458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.857469 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.864637 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.876603 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.888743 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.902264 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.913692 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.923801 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:28Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.963489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.964437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.964518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.964864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:28 crc kubenswrapper[4813]: I1202 10:08:28.964941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:28Z","lastTransitionTime":"2025-12-02T10:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.066919 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.067065 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.069861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.069948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.069968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.069998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.070019 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.173723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.173806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.173838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.173867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.173891 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.276861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.276918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.276936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.276959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.276977 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.326948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.327222 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.327327 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs podName:05bb9583-6b23-4207-b709-89dfe49fad73 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:30.327300351 +0000 UTC m=+34.522474683 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs") pod "network-metrics-daemon-62bfc" (UID: "05bb9583-6b23-4207-b709-89dfe49fad73") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.380171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.380245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.380269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.380299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.380322 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.483544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.483599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.483613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.483632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.483643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.587153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.587900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.587969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.588010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.588038 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.691500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.691574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.691595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.691622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.691645 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.731392 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.731456 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.731497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.731522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.731672 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.731706 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.731723 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.731783 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:45.731765606 +0000 UTC m=+49.926939908 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.732206 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.732228 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.732240 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.732283 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:45.732266041 +0000 UTC m=+49.927440343 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.732382 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.732460 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.732553 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:45.732514449 +0000 UTC m=+49.927688791 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.732595 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 10:08:45.732568351 +0000 UTC m=+49.927742663 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.796369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.796411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.796426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.796445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.796460 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.832379 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:29 crc kubenswrapper[4813]: E1202 10:08:29.832647 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:08:45.832602484 +0000 UTC m=+50.027776826 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.900014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.900105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.900122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.900141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:29 crc kubenswrapper[4813]: I1202 10:08:29.900157 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:29Z","lastTransitionTime":"2025-12-02T10:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.004395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.004500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.004517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.004540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.004556 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.067751 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.067834 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:30 crc kubenswrapper[4813]: E1202 10:08:30.068025 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:30 crc kubenswrapper[4813]: E1202 10:08:30.068304 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.108327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.108409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.108430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.108459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.108480 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.213253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.213320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.213333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.213356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.213375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.316091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.316147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.316163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.316190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.316204 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.338154 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:30 crc kubenswrapper[4813]: E1202 10:08:30.338420 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:30 crc kubenswrapper[4813]: E1202 10:08:30.338547 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs podName:05bb9583-6b23-4207-b709-89dfe49fad73 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:32.338516815 +0000 UTC m=+36.533691137 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs") pod "network-metrics-daemon-62bfc" (UID: "05bb9583-6b23-4207-b709-89dfe49fad73") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.419618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.420003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.420132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.420241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.420333 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.523400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.523468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.523487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.523513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.523531 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.627687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.627784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.627816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.627854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.627880 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.731476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.731554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.731578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.731615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.731641 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.835773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.835847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.835864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.835890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.835909 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.939308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.939369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.939394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.939422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:30 crc kubenswrapper[4813]: I1202 10:08:30.939439 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:30Z","lastTransitionTime":"2025-12-02T10:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.042733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.042824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.042852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.042882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.042904 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.067662 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.067677 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:31 crc kubenswrapper[4813]: E1202 10:08:31.067890 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:31 crc kubenswrapper[4813]: E1202 10:08:31.068150 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.151045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.151166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.151246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.151388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.151438 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.255270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.255350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.255375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.255405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.255432 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.358394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.358444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.358455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.358472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.358484 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.461653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.461726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.461751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.461780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.461803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.565911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.566001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.566037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.566117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.566156 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.669479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.669541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.669561 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.669595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.669631 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.773212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.773287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.773308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.773335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.773354 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.876667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.876741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.876761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.876789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.876807 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.980369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.980414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.980428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.980448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:31 crc kubenswrapper[4813]: I1202 10:08:31.980462 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:31Z","lastTransitionTime":"2025-12-02T10:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.066913 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.066960 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:32 crc kubenswrapper[4813]: E1202 10:08:32.067130 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:32 crc kubenswrapper[4813]: E1202 10:08:32.067295 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.084117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.084176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.084190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.084213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.084236 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.187011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.187059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.187097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.187121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.187134 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.289896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.289946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.289962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.289984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.290002 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.363455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:32 crc kubenswrapper[4813]: E1202 10:08:32.363717 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:32 crc kubenswrapper[4813]: E1202 10:08:32.363848 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs podName:05bb9583-6b23-4207-b709-89dfe49fad73 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:36.363819064 +0000 UTC m=+40.558993586 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs") pod "network-metrics-daemon-62bfc" (UID: "05bb9583-6b23-4207-b709-89dfe49fad73") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.392592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.392645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.392661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.392681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.392696 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.496448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.496531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.496554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.496582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.496602 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.600022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.600132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.600174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.600216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.600241 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.703545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.703621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.703635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.703656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.703670 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.807212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.807270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.807287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.807310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.807327 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.910042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.910111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.910124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.910145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:32 crc kubenswrapper[4813]: I1202 10:08:32.910157 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:32Z","lastTransitionTime":"2025-12-02T10:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.012781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.013264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.013356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.013447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.013546 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.067614 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.067737 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:33 crc kubenswrapper[4813]: E1202 10:08:33.068524 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:33 crc kubenswrapper[4813]: E1202 10:08:33.068270 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.116882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.116937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.116960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.116981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.116995 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.220149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.220240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.220266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.220301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.220328 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.328794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.328871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.328896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.328929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.328953 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.432871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.432937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.432949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.432968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.432980 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.536536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.536619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.536630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.536649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.536663 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.640485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.640564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.640577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.640600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.640613 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.743937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.743997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.744008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.744027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.744040 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.847126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.847196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.847213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.847236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.847259 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.949783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.949822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.949831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.949861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:33 crc kubenswrapper[4813]: I1202 10:08:33.949872 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:33Z","lastTransitionTime":"2025-12-02T10:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.053157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.053238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.053260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.053292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.053316 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.067736 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.067832 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:34 crc kubenswrapper[4813]: E1202 10:08:34.067930 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:34 crc kubenswrapper[4813]: E1202 10:08:34.068001 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.158105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.158251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.158279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.158311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.158333 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.262304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.262404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.262427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.262452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.262477 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.365598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.365649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.365686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.365707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.365720 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.468312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.468355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.468366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.468383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.468395 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.571915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.571979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.571997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.572022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.572041 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.675679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.675751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.675767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.675791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.675810 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.778708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.778784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.778804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.778832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.778854 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.881765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.881814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.881825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.881842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.881853 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.986042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.986125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.986147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.986175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.986195 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.991335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.991379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.991391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.991408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:34 crc kubenswrapper[4813]: I1202 10:08:34.991421 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:34Z","lastTransitionTime":"2025-12-02T10:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: E1202 10:08:35.006393 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:35Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.010798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.010847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.010857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.010877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.010886 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: E1202 10:08:35.025183 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:35Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.036312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.036366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.036380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.036398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.036412 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: E1202 10:08:35.050907 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:35Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.056186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.056266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.056290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.056321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.056346 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.067046 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.067117 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:35 crc kubenswrapper[4813]: E1202 10:08:35.067217 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:35 crc kubenswrapper[4813]: E1202 10:08:35.067376 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:35 crc kubenswrapper[4813]: E1202 10:08:35.076877 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:35Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.081686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.081741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.081756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.081774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.081788 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: E1202 10:08:35.096670 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:35Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:35 crc kubenswrapper[4813]: E1202 10:08:35.096813 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.098504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.098545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.098556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.098629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.098642 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.202448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.202525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.202551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.202583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.202608 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.306585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.306665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.306691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.306718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.306739 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.409936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.410415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.410428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.410447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.410461 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.513278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.513347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.513359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.513377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.513388 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.617469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.617542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.617565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.617593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.617613 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.721473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.721565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.721599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.721638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.721664 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.824925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.824964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.824976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.824993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.825006 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.928869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.928921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.928931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.928948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:35 crc kubenswrapper[4813]: I1202 10:08:35.928958 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:35Z","lastTransitionTime":"2025-12-02T10:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.031509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.031554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.031566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.031586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.031608 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.067861 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.067994 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:36 crc kubenswrapper[4813]: E1202 10:08:36.068240 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:36 crc kubenswrapper[4813]: E1202 10:08:36.068363 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.093029 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.108834 
4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.128571 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.135003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.135088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.135102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.135120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.135133 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.145119 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.162978 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.185807 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37d
a0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25d115e9d7a7ae45346d6ce2905094594485e585ca9da27ac3be8a6d8f6b3466\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:23Z\\\",\\\"message\\\":\\\"23.849550 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.849769 6084 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 10:08:23.850564 6084 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 10:08:23.850632 6084 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 10:08:23.850646 6084 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 10:08:23.850661 6084 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 10:08:23.850678 6084 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 10:08:23.850702 6084 factory.go:656] Stopping watch factory\\\\nI1202 10:08:23.850725 6084 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 10:08:23.850735 6084 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 10:08:23.850745 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 10:08:23.850755 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1202 10:08:23.850762 6084 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.200255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.214787 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.231329 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.238719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.238801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.238826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.238860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.238887 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.246146 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.262320 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.277939 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.289906 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.304693 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.317426 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.326354 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:36Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.341337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.341389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.341435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.341459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.341475 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.408041 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:36 crc kubenswrapper[4813]: E1202 10:08:36.408359 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:36 crc kubenswrapper[4813]: E1202 10:08:36.408544 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs podName:05bb9583-6b23-4207-b709-89dfe49fad73 nodeName:}" failed. No retries permitted until 2025-12-02 10:08:44.408501815 +0000 UTC m=+48.603676157 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs") pod "network-metrics-daemon-62bfc" (UID: "05bb9583-6b23-4207-b709-89dfe49fad73") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.444839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.444935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.444951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.444971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.444985 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.548196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.548330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.548361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.548393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.548417 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.652774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.652863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.652919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.652955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.652978 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.757009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.757154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.757180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.757207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.757230 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.860989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.861041 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.861053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.861088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.861102 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.965021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.965097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.965108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.965126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:36 crc kubenswrapper[4813]: I1202 10:08:36.965136 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:36Z","lastTransitionTime":"2025-12-02T10:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.066840 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.067099 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:37 crc kubenswrapper[4813]: E1202 10:08:37.067274 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:37 crc kubenswrapper[4813]: E1202 10:08:37.067500 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.069298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.069390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.069423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.069452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.069475 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.173433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.173526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.173568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.173605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.173630 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.276244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.276294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.276305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.276324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.276336 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.379505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.379562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.379572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.379591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.379609 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.483729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.483797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.483822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.483850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.483873 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.587796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.587870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.587893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.587920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.587941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.691262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.691346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.691361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.691379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.691391 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.794221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.794276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.794293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.794312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.794325 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.897275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.897330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.897341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.897358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:37 crc kubenswrapper[4813]: I1202 10:08:37.897368 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:37Z","lastTransitionTime":"2025-12-02T10:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.000859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.001414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.001553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.001720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.001844 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.067129 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.067158 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 10:08:38 crc kubenswrapper[4813]: E1202 10:08:38.068228 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:38 crc kubenswrapper[4813]: E1202 10:08:38.068365 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.068630 4813 scope.go:117] "RemoveContainer" containerID="66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.086862 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.105207 4813 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.105255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.105268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.105290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.105304 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.106261 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.126127 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.141123 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.165036 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.182116 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.195275 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.209458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.209522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.209535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.209553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.209565 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.211195 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.235500 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.257240 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.271403 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.289545 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.308039 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.312918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.312992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.313015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.313045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.313108 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.324903 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.346819 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.367802 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:38Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.415061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.415118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.415127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.415143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.415155 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.519547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.519607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.519622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.519643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.519664 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.622780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.622823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.622833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.622849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.622860 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.725776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.725856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.725869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.725886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.725898 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.828850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.828908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.828923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.828945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.828955 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.932108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.932167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.932182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.932204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:38 crc kubenswrapper[4813]: I1202 10:08:38.932214 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:38Z","lastTransitionTime":"2025-12-02T10:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.035910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.035962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.035975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.035997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.036012 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.067497 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.067525 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:39 crc kubenswrapper[4813]: E1202 10:08:39.067667 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:39 crc kubenswrapper[4813]: E1202 10:08:39.067810 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.139845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.139912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.139929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.139955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.139972 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.243740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.243800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.243816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.243840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.243859 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.346721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.346786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.346799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.346817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.346830 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.417493 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/1.log" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.421497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.421714 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.439243 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T1
0:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2
de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.449971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.450019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.450033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.450060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.450093 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.458823 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 
10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.475533 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.490363 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.502816 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.525033 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663
f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.542101 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.553262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.553571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.553683 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.553752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.553814 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.556636 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.570878 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.586470 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.602311 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.614605 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.627432 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.640584 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.657168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.657220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.657234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.657258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.657271 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.657921 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.671299 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:39Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.759628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.759682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.759697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.759736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.759754 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.862908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.862981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.863000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.863023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.863040 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.970816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.970882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.970907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.970933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:39 crc kubenswrapper[4813]: I1202 10:08:39.970951 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:39Z","lastTransitionTime":"2025-12-02T10:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.067867 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.067993 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:40 crc kubenswrapper[4813]: E1202 10:08:40.068213 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:40 crc kubenswrapper[4813]: E1202 10:08:40.068421 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.073440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.073482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.073494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.073511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.073524 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.177655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.177739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.177758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.177784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.177802 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.281639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.281693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.281707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.281729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.281745 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.386257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.386319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.386332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.386356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.386369 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.429626 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/2.log" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.430733 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/1.log" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.435544 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce" exitCode=1 Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.435621 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.435735 4813 scope.go:117] "RemoveContainer" containerID="66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.437348 4813 scope.go:117] "RemoveContainer" containerID="9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce" Dec 02 10:08:40 crc kubenswrapper[4813]: E1202 10:08:40.437693 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.457321 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.477965 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.489507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.489580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.489603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.489644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.489670 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.498731 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.520380 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.534756 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.548040 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.561713 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.577861 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.592642 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.593649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.593697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.593709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.593726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.593740 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.609308 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.611115 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.629260 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.648764 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.679373 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663
f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66d7d4c1a5f008a5cb9275be329e3f1c6d35b37da0f9c6a3c63babf333baad8f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:26Z\\\",\\\"message\\\":\\\"lates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-machine-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.250\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1202 10:08:25.454823 6243 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: 
true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.696268 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.696927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.696961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc 
kubenswrapper[4813]: I1202 10:08:40.696972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.696989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.697002 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.711562 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:0
8:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.727720 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:40Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.799974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.800120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.800149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 
10:08:40.800184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.800207 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.903671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.903741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.903764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.903795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:40 crc kubenswrapper[4813]: I1202 10:08:40.903817 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:40Z","lastTransitionTime":"2025-12-02T10:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.007355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.007422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.007436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.007458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.007472 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.066875 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.067050 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:41 crc kubenswrapper[4813]: E1202 10:08:41.067194 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:41 crc kubenswrapper[4813]: E1202 10:08:41.067407 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.111058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.111195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.111224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.111256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.111280 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.215353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.215393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.215404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.215421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.215432 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.318936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.318987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.318997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.319017 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.319030 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.422915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.422963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.422981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.423000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.423013 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.447833 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/2.log" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.453107 4813 scope.go:117] "RemoveContainer" containerID="9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce" Dec 02 10:08:41 crc kubenswrapper[4813]: E1202 10:08:41.453386 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.474703 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.495286 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.515153 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.525953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.526298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.526491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.526592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.526674 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.532193 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.550770 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.565705 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.578279 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.596573 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.611945 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.623487 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
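
The check-endpoints and networking-console-plugin statuses above both carry lastState.terminated.exitCode 137 with reason ContainerStatusUnknown: the runtime lost track of the container across the kubelet restart, and 137 follows the usual 128 + signal convention, i.e. SIGKILL. A trivial decoder (stdlib only, POSIX signal names):

import signal

def describe_exit(code: int) -> str:
    # Shell convention: codes above 128 mean "killed by signal (code - 128)".
    if code > 128:
        return f"killed by {signal.Signals(code - 128).name}"
    return f"exited normally with status {code}"

print(describe_exit(137))  # -> killed by SIGKILL
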
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.629413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.629463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.629479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.629500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.629515 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.644370 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663
f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
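
The patch bodies in these records are JSON buried under two layers of quoting, which is why they are so hard to read inline. A small helper to recover the patch object from one raw journal line is sketched below; the single unicode_escape pass matches how these lines render here and may need adjusting for other journal export formats.

import json

def extract_patch(line: str) -> dict:
    # Take the quoted payload between the two fixed markers in the record.
    chunk = line.split("failed to patch status ", 1)[1]
    chunk = chunk.rsplit(" for pod ", 1)[0]
    # Peel one layer of backslash escaping (ASCII-safe here), leaving a JSON
    # string literal; parse it, then parse the JSON document it contains.
    chunk = chunk.encode().decode("unicode_escape")
    inner = json.loads(chunk)  # -> the patch as a plain string
    return json.loads(inner)   # -> the patch as a dict

# Usage: patch = extract_patch(raw_line); print(json.dumps(patch, indent=2))
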
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.663024 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.676409 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.694145 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462
eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.705549 4813 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.719514 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:41Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.732670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.732711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.732725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.732748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.732762 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.836031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.836110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.836125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.836144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.836157 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.939482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.939557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.939569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.939587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:41 crc kubenswrapper[4813]: I1202 10:08:41.939602 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:41Z","lastTransitionTime":"2025-12-02T10:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.042599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.042667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.042678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.042695 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.042706 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.067880 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.067968 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:42 crc kubenswrapper[4813]: E1202 10:08:42.068146 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:42 crc kubenswrapper[4813]: E1202 10:08:42.068370 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.146330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.146397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.146438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.146471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.146494 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.251246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.251314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.251339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.251374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.251398 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.354455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.354511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.354523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.354541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.354556 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.456519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.456591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.456609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.456632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.456651 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.560239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.560307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.560330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.560352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.560368 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.663925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.663982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.663995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.664011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.664023 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.766900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.766947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.766956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.766972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.766982 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.870217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.870312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.870333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.870357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.870377 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.973557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.973621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.973638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.973663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:42 crc kubenswrapper[4813]: I1202 10:08:42.973683 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:42Z","lastTransitionTime":"2025-12-02T10:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.067518 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.067573 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:43 crc kubenswrapper[4813]: E1202 10:08:43.067685 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:43 crc kubenswrapper[4813]: E1202 10:08:43.067821 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.075815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.075875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.075893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.075917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.075936 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.179456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.179526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.179543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.179570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.179586 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.282686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.282747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.282759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.282779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.282793 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.386324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.386401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.386442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.386484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.386514 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.489587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.489668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.489693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.489724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.489749 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.593320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.593379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.593396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.593416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.593429 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.696457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.696511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.696520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.696539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.696552 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.799653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.799721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.799735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.799755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.799769 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.902415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.902462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.902472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.902486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:43 crc kubenswrapper[4813]: I1202 10:08:43.902499 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:43Z","lastTransitionTime":"2025-12-02T10:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.005108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.005158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.005169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.005187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.005200 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.067420 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.067574 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:44 crc kubenswrapper[4813]: E1202 10:08:44.067630 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:44 crc kubenswrapper[4813]: E1202 10:08:44.067753 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.108552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.108590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.108598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.108611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.108621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.211458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.211518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.211533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.211552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.211565 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.314510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.314581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.314598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.314627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.314645 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.418342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.418383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.418396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.418413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.418426 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.504972 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:44 crc kubenswrapper[4813]: E1202 10:08:44.505277 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:44 crc kubenswrapper[4813]: E1202 10:08:44.505410 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs podName:05bb9583-6b23-4207-b709-89dfe49fad73 nodeName:}" failed. No retries permitted until 2025-12-02 10:09:00.505383699 +0000 UTC m=+64.700558001 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs") pod "network-metrics-daemon-62bfc" (UID: "05bb9583-6b23-4207-b709-89dfe49fad73") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.521719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.521783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.521802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.521833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.521855 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.623951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.624008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.624024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.624045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.624062 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.727118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.727168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.727177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.727199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.727211 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.830388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.830449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.830463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.830487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.830504 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.933416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.933482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.933496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.933514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:44 crc kubenswrapper[4813]: I1202 10:08:44.933527 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:44Z","lastTransitionTime":"2025-12-02T10:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.037310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.037365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.037376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.037395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.037408 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.067167 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.067346 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.067196 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.067495 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.140160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.140219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.140233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.140253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.140269 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.243215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.243269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.243289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.243312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.243328 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.339509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.339570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.339583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.339599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.339610 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.353893 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:45Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.359459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.359527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.359541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.359567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.359581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.374579 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:45Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.379681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.379721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.379732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.379751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.379763 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.390858 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:45Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.396362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.396400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.396409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.396423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.396433 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.408702 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:45Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.412971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.413016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.413030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.413047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.413059 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.426480 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:45Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.426607 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.428856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.428889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.428898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.428916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.428925 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.532058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.532115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.532123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.532141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.532151 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.635615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.635681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.635697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.635724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.635741 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.738995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.739266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.739286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.739307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.739322 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.820817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.820880 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.820927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.820962 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821160 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821185 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821200 4813 projected.go:194] Error preparing data for 
projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821254 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:09:17.821237413 +0000 UTC m=+82.016411715 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821289 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821321 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821339 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821461 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:09:17.821442789 +0000 UTC m=+82.016617121 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821486 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821504 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821619 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 10:09:17.821594784 +0000 UTC m=+82.016769096 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.821649 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:09:17.821639175 +0000 UTC m=+82.016813487 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.842543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.842604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.842617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.842641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.842654 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.921760 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:08:45 crc kubenswrapper[4813]: E1202 10:08:45.921993 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:09:17.921957817 +0000 UTC m=+82.117132119 (durationBeforeRetry 32s). 
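
The "durationBeforeRetry 32s" in the MountVolume/UnmountVolume failures above comes from kubelet's per-operation exponential backoff. A sketch of the doubling schedule with illustrative constants; only the 32-second step itself is taken from the log:

    # Illustrative doubling backoff, not kubelet's exact constants: only the
    # 32s value is from the log (durationBeforeRetry 32s).
    from itertools import islice

    def backoff(initial=0.5, factor=2.0, cap=600.0):
        d = initial
        while True:
            yield d
            d = min(d * factor, cap)

    print([f"{d:g}s" for d in islice(backoff(), 8)])
    # ['0.5s', '1s', '2s', '4s', '8s', '16s', '32s', '64s']
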
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.944936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.944991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.945003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.945021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:45 crc kubenswrapper[4813]: I1202 10:08:45.945037 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:45Z","lastTransitionTime":"2025-12-02T10:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.048593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.048673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.048690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.048748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.048767 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.067190 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.067284 4813 util.go:30] "No sandbox for pod can be found. 
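
The UnmountVolume.TearDown failure above says kubevirt.io.hostpath-provisioner is not in kubelet's list of registered CSI drivers. One way to see what has actually registered, assuming the standard kubelet plugin-registration layout on this node:

    # Sketch: list CSI driver registration sockets under the standard kubelet
    # plugins_registry path (an assumption about this node's layout; run as
    # root on the node itself).
    import pathlib

    reg = pathlib.Path("/var/lib/kubelet/plugins_registry")
    socks = sorted(reg.glob("*.sock")) if reg.is_dir() else []

    print("registered plugin sockets:", [s.name for s in socks] or "none")
    print("kubevirt.io.hostpath-provisioner registered:",
          any("kubevirt.io.hostpath-provisioner" in s.name for s in socks))
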
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:46 crc kubenswrapper[4813]: E1202 10:08:46.067465 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:46 crc kubenswrapper[4813]: E1202 10:08:46.067582 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.090577 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 
10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.111579 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.129096 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.150512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.150572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.150587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.150609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.150627 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
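
The patch payloads in these status_manager entries are JSON, doubly escaped by journal quoting. A small sketch to make one readable; `raw` here is a truncated example assembled from values in the log, and the single-level unescape assumes the payload was pasted exactly as journalctl prints it:

    # Sketch: recover readable JSON from an escaped status patch as it
    # appears in the journal. `raw` is a truncated example from the log.
    import json

    raw = r'{\"metadata\":{\"uid\":\"3b6479f0-333b-4a96-9adf-2099afdc2447\"},\"status\":{\"podIP\":null,\"podIPs\":null}}'
    patch = json.loads(raw.replace('\\"', '"'))
    print(json.dumps(patch, indent=2))
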
Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.156931 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.177578 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.192545 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.211922 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.227606 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.243151 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.253373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.253414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.253424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.253441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.253453 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.259973 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.275864 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.290089 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.304868 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.320340 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.334997 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.351295 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:46Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.356255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.356321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.356334 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.356355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.356371 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.459164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.459219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.459231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.459257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.459270 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.561686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.561761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.561776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.561797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.561809 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.664636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.664692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.664709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.664732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.664744 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.767723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.767803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.767828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.767861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.767886 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.871293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.871339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.871351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.871367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.871383 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.974455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.974506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.974522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.974539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:46 crc kubenswrapper[4813]: I1202 10:08:46.974554 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:46Z","lastTransitionTime":"2025-12-02T10:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.067104 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.067161 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:47 crc kubenswrapper[4813]: E1202 10:08:47.067260 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:47 crc kubenswrapper[4813]: E1202 10:08:47.067327 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.078020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.078102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.078136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.078154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.078168 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.181044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.181104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.181117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.181133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.181145 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.284992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.285047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.285060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.285082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.285113 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.388312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.388388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.388405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.388430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.388453 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.491269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.491332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.491342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.491357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.491387 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.595731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.595840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.595867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.595895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.595913 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.699011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.699283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.699300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.699316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.699328 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.802272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.802332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.802343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.802362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.802375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.905641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.905697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.905712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.905731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:47 crc kubenswrapper[4813]: I1202 10:08:47.905744 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:47Z","lastTransitionTime":"2025-12-02T10:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.008748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.009477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.009526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.009548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.009558 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.066894 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.066959 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:48 crc kubenswrapper[4813]: E1202 10:08:48.067162 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:48 crc kubenswrapper[4813]: E1202 10:08:48.067279 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.112390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.112473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.112490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.112514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.112528 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.215775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.215846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.215859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.215877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.215892 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.318344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.318408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.318417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.318431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.318444 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.421494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.421599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.421615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.421631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.421641 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.524866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.524926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.524939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.524963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.524977 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.628712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.628782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.628804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.628834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.628857 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.732143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.732269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.732313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.732346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.732384 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.835213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.835283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.835301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.835327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.835345 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.938912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.938967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.938979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.938999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:48 crc kubenswrapper[4813]: I1202 10:08:48.939017 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:48Z","lastTransitionTime":"2025-12-02T10:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.042535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.042591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.042622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.042641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.042656 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.067511 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.067776 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:49 crc kubenswrapper[4813]: E1202 10:08:49.067914 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:49 crc kubenswrapper[4813]: E1202 10:08:49.068044 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.144868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.144919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.144933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.144949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.144962 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.247775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.247844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.247854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.247871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.247882 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.350799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.350847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.350858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.350875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.350887 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.453986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.454048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.454066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.454124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.454143 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.558261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.558623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.558759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.558872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.558972 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.662499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.662592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.662631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.662652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.662666 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.765436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.765501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.765513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.765532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.765544 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.868021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.868063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.868130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.868152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.868165 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.971305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.971379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.971401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.971429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:49 crc kubenswrapper[4813]: I1202 10:08:49.971452 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:49Z","lastTransitionTime":"2025-12-02T10:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.067646 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.067642 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:50 crc kubenswrapper[4813]: E1202 10:08:50.067840 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:50 crc kubenswrapper[4813]: E1202 10:08:50.067949 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.074127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.074178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.074190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.074208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.074219 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.176682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.176746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.176765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.176789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.176804 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.279706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.279782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.279794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.279812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.279827 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.382663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.382737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.382755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.382778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.382793 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.485314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.485369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.485382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.485402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.485416 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.588861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.588927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.588945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.588964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.588980 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.596526 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.611433 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.614941 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.630139 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.646988 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.661435 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.675552 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.691749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.691866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.691878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.691902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.691915 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.700303 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"qu
ay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.713505 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.727638 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.748871 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663
f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.769485 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.781667 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.794884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.794936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.794947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.794965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.794976 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.797175 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.811418 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.824390 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.835332 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.855771 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:50Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.898766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.898826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.898843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.898870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:50 crc kubenswrapper[4813]: I1202 10:08:50.898889 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:50Z","lastTransitionTime":"2025-12-02T10:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.001971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.002061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.002121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.002153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.002179 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.066833 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.066917 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:51 crc kubenswrapper[4813]: E1202 10:08:51.067037 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:51 crc kubenswrapper[4813]: E1202 10:08:51.067488 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.105392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.105451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.105472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.105499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.105521 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.208456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.208512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.208523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.208544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.208561 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.311375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.311432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.311443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.311463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.311475 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.414539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.414599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.414611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.414628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.414638 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.517194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.517238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.517247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.517262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.517272 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.620444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.620540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.620564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.620596 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.620619 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.723425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.723474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.723486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.723502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.723515 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.826552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.826635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.826674 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.826705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.826731 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.930579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.930699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.930720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.930747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:51 crc kubenswrapper[4813]: I1202 10:08:51.930766 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:51Z","lastTransitionTime":"2025-12-02T10:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.034353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.034438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.034466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.034499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.034525 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.067030 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.067033 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:52 crc kubenswrapper[4813]: E1202 10:08:52.067213 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:52 crc kubenswrapper[4813]: E1202 10:08:52.067227 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.138247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.138300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.138314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.138331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.138343 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.241552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.241830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.241954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.242028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.242205 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.345754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.345799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.345820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.345848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.345857 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.448529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.448580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.448591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.448610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.448623 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.551228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.551277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.551289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.551305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.551315 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.654219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.654272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.654281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.654295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.654307 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.757348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.757404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.757416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.757431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.757441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.860492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.860566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.860589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.860621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.860643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.964109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.964161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.964170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.964186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:52 crc kubenswrapper[4813]: I1202 10:08:52.964197 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:52Z","lastTransitionTime":"2025-12-02T10:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.067424 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.067450 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:53 crc kubenswrapper[4813]: E1202 10:08:53.067626 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.067647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.067677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.067689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.067708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.067721 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: E1202 10:08:53.067804 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.171741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.171803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.171821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.171845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.171861 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.274512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.274597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.274610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.274628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.274641 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.377509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.377574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.377583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.377601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.377612 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.480736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.480800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.480846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.480866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.480879 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.583535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.583622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.583648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.583683 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.583706 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.687254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.687326 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.687344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.687373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.687392 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.790939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.791006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.791025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.791056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.791116 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.894122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.894175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.894190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.894209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.894224 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.997727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.997787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.997803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.997825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:53 crc kubenswrapper[4813]: I1202 10:08:53.997841 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:53Z","lastTransitionTime":"2025-12-02T10:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.067303 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.067303 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:54 crc kubenswrapper[4813]: E1202 10:08:54.067875 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:54 crc kubenswrapper[4813]: E1202 10:08:54.068114 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.068478 4813 scope.go:117] "RemoveContainer" containerID="9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce" Dec 02 10:08:54 crc kubenswrapper[4813]: E1202 10:08:54.068772 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.101581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.101659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.101673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.101700 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.101715 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.204682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.204751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.204772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.204799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.204817 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.308907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.309015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.309039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.309106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.309134 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.412050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.412144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.412162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.412186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.412203 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.515373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.515428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.515441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.515461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.515473 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.619906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.619976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.620004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.620038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.620066 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.722849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.723258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.723271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.723291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.723303 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.827002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.827123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.827139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.827158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.827173 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.930785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.930829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.930866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.930882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:54 crc kubenswrapper[4813]: I1202 10:08:54.930892 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:54Z","lastTransitionTime":"2025-12-02T10:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.034228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.034312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.034331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.034360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.034381 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.066890 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.067036 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:55 crc kubenswrapper[4813]: E1202 10:08:55.067213 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:55 crc kubenswrapper[4813]: E1202 10:08:55.067309 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.138595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.138675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.138699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.138731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.138757 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.242362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.242409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.242419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.242433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.242445 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.345276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.345374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.345404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.345436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.345462 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.448775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.448887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.448911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.448942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.448967 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.551561 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.551619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.551632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.551650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.551662 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.632396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.632459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.632472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.632492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.632507 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: E1202 10:08:55.646453 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.650804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.650837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.650846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.650861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.650871 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: E1202 10:08:55.662915 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.667651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.667689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.667699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.667714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.667725 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: E1202 10:08:55.679849 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.684204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.684246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.684257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.684271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.684280 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: E1202 10:08:55.696441 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.699940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.699979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.699993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.700010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.700021 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: E1202 10:08:55.712403 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:55Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:55 crc kubenswrapper[4813]: E1202 10:08:55.712520 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.714725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.714767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.714779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.714796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.714811 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.817494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.817549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.817562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.817580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.817593 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.920615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.920654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.920665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.920681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:55 crc kubenswrapper[4813]: I1202 10:08:55.920694 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:55Z","lastTransitionTime":"2025-12-02T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.024298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.024377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.024393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.024413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.024427 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.067003 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.067228 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:56 crc kubenswrapper[4813]: E1202 10:08:56.067417 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:56 crc kubenswrapper[4813]: E1202 10:08:56.067618 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.094930 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 
10:08:56.115777 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.127510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.127582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.127606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.127637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.127660 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.133116 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.149979 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.162448 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.174803 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.193148 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663
f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.205537 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.217383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.228271 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.230797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.230858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.230881 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.230911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.230933 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.240345 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.251834 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",
\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.263402 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for 
pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.277758 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.291912 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.303080 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.311626 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:08:56Z is after 2025-08-24T17:21:41Z" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.334431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.334489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.334502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.334521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.334553 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.439225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.439293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.439313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.439339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.439357 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.543339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.543404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.543422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.543447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.543464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.646570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.646646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.646680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.646699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.646710 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.750557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.750606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.750618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.750637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.750649 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.853685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.854345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.854516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.854658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.854789 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.958595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.959133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.959351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.959544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:56 crc kubenswrapper[4813]: I1202 10:08:56.959711 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:56Z","lastTransitionTime":"2025-12-02T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.063251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.063299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.063312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.063329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.063343 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.069333 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:57 crc kubenswrapper[4813]: E1202 10:08:57.069537 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.069791 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:57 crc kubenswrapper[4813]: E1202 10:08:57.070017 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.168121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.168497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.168707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.168897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.169114 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.272253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.272312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.272327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.272347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.272362 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.376159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.376436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.376468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.376501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.376573 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.480804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.480862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.480879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.480903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.480917 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.584404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.584471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.584487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.584514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.584532 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.687643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.687695 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.687705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.687723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.687735 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.790934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.791017 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.791036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.791062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.791107 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.894393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.894457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.894471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.894490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.894510 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.997308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.997379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.997391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.997436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:57 crc kubenswrapper[4813]: I1202 10:08:57.997451 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:57Z","lastTransitionTime":"2025-12-02T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.067774 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.067835 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:08:58 crc kubenswrapper[4813]: E1202 10:08:58.067990 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:08:58 crc kubenswrapper[4813]: E1202 10:08:58.068201 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.100911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.100969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.100978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.100994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.101015 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.203166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.203196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.203205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.203219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.203231 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.306919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.306969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.306978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.306995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.307007 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.409632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.409677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.409708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.409725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.409736 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.513628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.513678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.513687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.513702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.513712 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.617436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.617486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.617499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.617519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.617531 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.720110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.720162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.720172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.720186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.720199 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.823515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.823579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.823598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.823623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.823641 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.927458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.927530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.927549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.927574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:58 crc kubenswrapper[4813]: I1202 10:08:58.927593 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:58Z","lastTransitionTime":"2025-12-02T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.030866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.030955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.030974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.031000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.031018 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.067361 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.067540 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:08:59 crc kubenswrapper[4813]: E1202 10:08:59.067621 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:08:59 crc kubenswrapper[4813]: E1202 10:08:59.068030 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.081934 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.134586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.134710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.134742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.134775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.134794 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.237971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.238358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.238387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.238417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.238514 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.341599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.341648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.341658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.341673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.341682 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.445114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.445500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.445594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.445699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.445805 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.548926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.549013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.549036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.549068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.549166 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.652895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.652968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.652979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.652997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.653027 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.756136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.756208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.756222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.756263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.756279 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.859227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.859576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.859591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.859613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.859627 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.962139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.962202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.962218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.962242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:08:59 crc kubenswrapper[4813]: I1202 10:08:59.962260 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:08:59Z","lastTransitionTime":"2025-12-02T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.065001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.065049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.065061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.065101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.065115 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.067454 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.067532 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:00 crc kubenswrapper[4813]: E1202 10:09:00.067558 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:00 crc kubenswrapper[4813]: E1202 10:09:00.067680 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.167610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.167663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.167681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.167710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.167724 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.270807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.271190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.271283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.271376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.271464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.374460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.374509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.374522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.374540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.374555 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.477236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.477272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.477282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.477313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.477324 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.524284 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/0.log" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.524387 4813 generic.go:334] "Generic (PLEG): container finished" podID="30b516bc-ab92-49fb-8f3b-431cf0ef3164" containerID="c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec" exitCode=1 Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.524442 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x7cgx" event={"ID":"30b516bc-ab92-49fb-8f3b-431cf0ef3164","Type":"ContainerDied","Data":"c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.525057 4813 scope.go:117] "RemoveContainer" containerID="c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.542968 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.556791 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.572025 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.581196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.581267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.581280 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.581301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.581313 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.585378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:00 crc kubenswrapper[4813]: E1202 10:09:00.585546 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.585514 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: E1202 10:09:00.585633 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs podName:05bb9583-6b23-4207-b709-89dfe49fad73 nodeName:}" failed. No retries permitted until 2025-12-02 10:09:32.585613705 +0000 UTC m=+96.780788007 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs") pod "network-metrics-daemon-62bfc" (UID: "05bb9583-6b23-4207-b709-89dfe49fad73") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.602949 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"2025-12-02T10:08:15+00:00 
[cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9\\\\n2025-12-02T10:08:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9 to /host/opt/cni/bin/\\\\n2025-12-02T10:08:15Z [verbose] multus-daemon started\\\\n2025-12-02T10:08:15Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:09:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.618305 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.631379 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.646989 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.661065 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.673921 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.683578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.683623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.683631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.683645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.683654 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.691115 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca
001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.703682 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df6ab83b-de8c-403d-b118-047d9b949e9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.721498 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.736185 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.749240 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.776564 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663
f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.786130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.786181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.786195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.786214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.786228 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.794021 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.810310 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:09:00Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.889916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.889971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.889981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.889999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.890009 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.992770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.992813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.992825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.992844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:00 crc kubenswrapper[4813]: I1202 10:09:00.992857 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:00Z","lastTransitionTime":"2025-12-02T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.067430 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.067438 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:01 crc kubenswrapper[4813]: E1202 10:09:01.067585 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:01 crc kubenswrapper[4813]: E1202 10:09:01.067652 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.094991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.095039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.095047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.095103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.095119 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.197706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.197755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.197768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.197789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.197801 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.301442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.301489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.301499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.301515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.301526 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.404398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.404449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.404462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.404484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.404498 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.507154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.507211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.507224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.507242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.507258 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.530775 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/0.log" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.530839 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x7cgx" event={"ID":"30b516bc-ab92-49fb-8f3b-431cf0ef3164","Type":"ContainerStarted","Data":"b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.548959 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7a
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.562354 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.577207 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.589804 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df6ab83b-de8c-403d-b118-047d9b949e9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.610479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.610544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.610557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.610583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.610595 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.620134 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.647233 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.671231 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.693447 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663
f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.707287 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.713011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.713050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.713063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.713115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.713129 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.722918 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.735942 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.750452 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.766736 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"2025-12-02T10:08:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9\\\\n2025-12-02T10:08:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9 to /host/opt/cni/bin/\\\\n2025-12-02T10:08:15Z [verbose] multus-daemon started\\\\n2025-12-02T10:08:15Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:09:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.780021 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.792681 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.808122 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.815362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.815401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.815413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.815430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.815444 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.824134 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.835191 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:01Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.918304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.918377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.918400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.918424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:01 crc kubenswrapper[4813]: I1202 10:09:01.918439 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:01Z","lastTransitionTime":"2025-12-02T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.022456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.022526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.022542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.022563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.022581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.067772 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:02 crc kubenswrapper[4813]: E1202 10:09:02.067910 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.067772 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:02 crc kubenswrapper[4813]: E1202 10:09:02.068141 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.124778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.124834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.124850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.124871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.124887 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.227870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.228597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.228631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.228653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.228664 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.332884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.333000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.333020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.333043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.333060 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.436468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.436510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.436522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.436537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.436549 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.539325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.539443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.539465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.539489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.539506 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.642856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.642919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.642931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.642948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.642960 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.746268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.746340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.746372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.746399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.746417 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.849421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.849530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.849542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.849561 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.849574 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.952430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.952491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.952503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.952521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:02 crc kubenswrapper[4813]: I1202 10:09:02.952532 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:02Z","lastTransitionTime":"2025-12-02T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.056055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.056124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.056136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.056155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.056169 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.067694 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.067761 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:03 crc kubenswrapper[4813]: E1202 10:09:03.067920 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:03 crc kubenswrapper[4813]: E1202 10:09:03.067971 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.159194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.159251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.159263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.159279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.159294 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.261812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.261886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.261903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.261931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.261949 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.364813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.364870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.364883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.364903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.364914 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.467940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.467999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.468011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.468027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.468038 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.571308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.571358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.571367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.571384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.571394 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.673338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.673402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.673416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.673432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.673444 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.776273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.776332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.776345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.776367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.776382 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.879170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.879218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.879228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.879243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.879256 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.981858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.981896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.981905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.981920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:03 crc kubenswrapper[4813]: I1202 10:09:03.981930 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:03Z","lastTransitionTime":"2025-12-02T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.067007 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.067102 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:04 crc kubenswrapper[4813]: E1202 10:09:04.067319 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:04 crc kubenswrapper[4813]: E1202 10:09:04.067456 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.084671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.084710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.084717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.084733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.084744 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.188289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.188360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.188373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.188392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.188403 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.291230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.291289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.291300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.291317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.291329 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.394457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.394546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.394580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.394618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.394646 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.497283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.497319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.497328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.497342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.497352 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.600555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.600627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.600640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.600716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.600735 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.703758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.703806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.703818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.703835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.703848 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.806788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.806856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.806869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.806907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.806920 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.910945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.911799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.911815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.911863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:04 crc kubenswrapper[4813]: I1202 10:09:04.911879 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:04Z","lastTransitionTime":"2025-12-02T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.015113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.015171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.015181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.015204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.015220 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.067752 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.067802 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:05 crc kubenswrapper[4813]: E1202 10:09:05.067901 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:05 crc kubenswrapper[4813]: E1202 10:09:05.068340 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.118546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.118608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.118622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.118643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.118655 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.221882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.221920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.221930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.221946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.221958 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.325739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.325825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.325848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.325874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.325892 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.429288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.429344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.429359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.429382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.429394 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.532570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.532630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.532642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.532661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.532674 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.636008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.636054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.636099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.636120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.636132 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.739152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.739198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.739209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.739226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.739236 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.842795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.842851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.842867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.842887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.842900 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.919735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.919808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.919833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.919866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.919888 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: E1202 10:09:05.936589 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.941350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.941428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.941444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.941507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.941525 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: E1202 10:09:05.955593 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.960145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.960192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.960205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.960230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.960243 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: E1202 10:09:05.975123 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.980614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.980670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.980681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.980704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:05 crc kubenswrapper[4813]: I1202 10:09:05.980732 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:05Z","lastTransitionTime":"2025-12-02T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:05 crc kubenswrapper[4813]: E1202 10:09:05.997796 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:05Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.002550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.002640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.002655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.002676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.002689 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: E1202 10:09:06.017437 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: E1202 10:09:06.017590 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.019759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.019806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.019826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.019855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.019875 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.067995 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.068176 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:06 crc kubenswrapper[4813]: E1202 10:09:06.068355 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:06 crc kubenswrapper[4813]: E1202 10:09:06.069092 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.086754 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.104015 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.117281 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.122423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.122476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.122493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.122514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.122533 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.131962 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.146251 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"2025-12-02T10:08:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9\\\\n2025-12-02T10:08:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9 to /host/opt/cni/bin/\\\\n2025-12-02T10:08:15Z [verbose] multus-daemon started\\\\n2025-12-02T10:08:15Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:09:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.159939 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.173846 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.189431 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.206143 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.219139 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.224929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.225295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.225386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.225470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.225541 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.234494 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca
001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.249558 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df6ab83b-de8c-403d-b118-047d9b949e9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.266841 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.281484 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.295353 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.316255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663
f07fc330ce63935ac14538ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.328611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.328687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.328698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.328717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.328737 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.335667 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.349400 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T10:09:06Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.431903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.431964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.431977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.432004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.432016 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.535499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.535555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.535564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.535579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.535588 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.638760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.638816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.638828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.638849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.638864 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.741847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.741896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.741909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.741928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.741942 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.845094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.845149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.845195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.845215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.845230 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.948295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.948349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.948362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.948382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:06 crc kubenswrapper[4813]: I1202 10:09:06.948396 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:06Z","lastTransitionTime":"2025-12-02T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.057097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.057166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.057178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.057198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.057208 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.067112 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.067112 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:07 crc kubenswrapper[4813]: E1202 10:09:07.067387 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:07 crc kubenswrapper[4813]: E1202 10:09:07.067629 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.161514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.161578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.161596 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.161622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.161640 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.265113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.265165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.265176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.265193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.265209 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.368478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.368525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.368539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.368557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.368568 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.472068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.472162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.472189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.472216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.472232 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.575377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.575428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.575441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.575461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.575474 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.678655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.678706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.678717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.678737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.678747 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.781327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.781375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.781386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.781404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.781415 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.885053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.885183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.885210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.885260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.885287 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.988644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.988703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.988722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.988743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:07 crc kubenswrapper[4813]: I1202 10:09:07.988758 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:07Z","lastTransitionTime":"2025-12-02T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.067481 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.067505 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:08 crc kubenswrapper[4813]: E1202 10:09:08.067675 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:08 crc kubenswrapper[4813]: E1202 10:09:08.067709 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.068992 4813 scope.go:117] "RemoveContainer" containerID="9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.091361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.091401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.091413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.091432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.091444 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.194794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.194926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.194939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.194960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.195151 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.297992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.298038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.298047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.298065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.298093 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.400889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.400981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.400994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.401014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.401033 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.504366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.504407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.504415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.504431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.504441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.556833 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/2.log" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.560905 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.561423 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.576517 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.595740 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.607577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.607648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.607660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.607679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.607691 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.608415 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.620613 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.635103 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.655599 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284c
acc6d97e572bfbd51f90bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.671175 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.686791 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 
10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.701346 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.711364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.711424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.711436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.711454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.711472 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.718204 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df6ab83b-de8c-403d-b118-047d9b949e9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.735779 4813 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d3545
6f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.749170 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.764930 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"2025-12-02T10:08:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9\\\\n2025-12-02T10:08:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9 to /host/opt/cni/bin/\\\\n2025-12-02T10:08:15Z [verbose] multus-daemon started\\\\n2025-12-02T10:08:15Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:09:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.780823 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.800009 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.814522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.814565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.814578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.814598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.814610 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.816421 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.829415 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.850402 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:08Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.918302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.918357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.918370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.918389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:08 crc kubenswrapper[4813]: I1202 10:09:08.918404 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:08Z","lastTransitionTime":"2025-12-02T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.021811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.021864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.021875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.021893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.021905 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.067622 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.067683 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:09 crc kubenswrapper[4813]: E1202 10:09:09.067814 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:09 crc kubenswrapper[4813]: E1202 10:09:09.067953 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.125290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.125350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.125368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.125394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.125409 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.229049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.229133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.229146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.229166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.229180 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.332232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.332283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.332295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.332315 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.332328 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.435989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.436045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.436058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.436098 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.436121 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.539911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.539951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.539964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.539979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.539990 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.566438 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/3.log" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.566997 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/2.log" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.569428 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" exitCode=1 Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.569474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.569521 4813 scope.go:117] "RemoveContainer" containerID="9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.570385 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:09:09 crc kubenswrapper[4813]: E1202 10:09:09.570626 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.588267 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.604847 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.618790 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.642498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.642542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.642552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.642568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.642578 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.642677 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284c
acc6d97e572bfbd51f90bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9406b98024bc2204bf806d3476d1986277ba8663f07fc330ce63935ac14538ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:08:39Z\\\",\\\"message\\\":\\\"e service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1202 10:08:39.054843 6459 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:09Z\\\",\\\"message\\\":\\\"gs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1202 10:09:09.230857 6817 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1202 10:09:09.230896 6817 services_controller.go:445] Built service openshift-marketplace/redhat-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI1202 10:09:09.230947 6817 services_controller.go:451] Built service openshift-marketplace/redhat-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, 
Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.663916 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.679926 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 
10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.699239 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.713394 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df6ab83b-de8c-403d-b118-047d9b949e9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.735395 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e63
55e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.744943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.745011 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.745029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.745057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.745103 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.748536 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.763454 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.778475 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.799162 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.816006 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.830853 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.848126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.848185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.848199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.848226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.848241 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.848910 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"2025-12-02T10:08:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9\\\\n2025-12-02T10:08:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9 to /host/opt/cni/bin/\\\\n2025-12-02T10:08:15Z [verbose] multus-daemon started\\\\n2025-12-02T10:08:15Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:09:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.862028 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.872647 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:09Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.951373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.951415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.951424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.951443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:09 crc kubenswrapper[4813]: I1202 10:09:09.951457 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:09Z","lastTransitionTime":"2025-12-02T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.054154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.054202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.054214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.054228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.054240 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.066881 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.067036 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:10 crc kubenswrapper[4813]: E1202 10:09:10.067161 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:10 crc kubenswrapper[4813]: E1202 10:09:10.067361 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.157506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.157539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.157548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.157564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.157575 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.260204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.260244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.260254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.260268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.260278 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.363572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.363637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.363650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.363670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.363682 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.466850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.466902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.466915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.466932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.466943 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.570156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.570226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.570242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.570264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.570281 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.575531 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/3.log" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.580940 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:09:10 crc kubenswrapper[4813]: E1202 10:09:10.581572 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.596687 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.609190 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df6ab83b-de8c-403d-b118-047d9b949e9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.628931 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.643486 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.659984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.672698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.672738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.672750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.672770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.672783 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.688379 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:09Z\\\",\\\"message\\\":\\\"gs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1202 10:09:09.230857 6817 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1202 10:09:09.230896 6817 services_controller.go:445] Built service openshift-marketplace/redhat-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI1202 10:09:09.230947 6817 services_controller.go:451] Built service openshift-marketplace/redhat-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:09:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.707607 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.721570 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 
10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.741333 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.756569 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.772133 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.778285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.778336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.778348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.778371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.778385 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.786537 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.799291 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"2025-12-02T10:08:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9\\\\n2025-12-02T10:08:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9 to /host/opt/cni/bin/\\\\n2025-12-02T10:08:15Z [verbose] multus-daemon started\\\\n2025-12-02T10:08:15Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:09:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.811275 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.822973 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.836539 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.846808 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.855556 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:10Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.880646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.880738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.880762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.880795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.880818 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.984614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.984676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.984687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.984705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:10 crc kubenswrapper[4813]: I1202 10:09:10.984718 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:10Z","lastTransitionTime":"2025-12-02T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.067267 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.067317 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:11 crc kubenswrapper[4813]: E1202 10:09:11.067548 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:11 crc kubenswrapper[4813]: E1202 10:09:11.067703 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.087949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.088023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.088052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.088121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.088150 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.190998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.191069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.191121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.191147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.191163 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.294179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.294259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.294275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.294309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.294330 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.397054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.397160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.397178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.397213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.397251 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.500282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.500356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.500371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.500389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.500402 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.608492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.608538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.608550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.608568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.608580 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.711205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.711269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.711287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.711313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.711332 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.814437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.814503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.814515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.814534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.814547 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.917626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.917682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.917698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.917719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.917733 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:11Z","lastTransitionTime":"2025-12-02T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.021226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.021283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.021293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.021310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.021321 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.066870 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.066904 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:12 crc kubenswrapper[4813]: E1202 10:09:12.067026 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
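
The block above is the kubelet's node-status loop: roughly every 100 ms it re-records the same four node events and re-asserts Ready=False, because the container runtime keeps reporting NetworkReady=false for as long as no CNI plugin has written a network configuration into /etc/kubernetes/cni/net.d/. The probe itself reduces to a directory check; a minimal Python sketch of the idea (an illustration only, not the actual kubelet/CRI-O code; the extension list mirrors the defaults CNI config loaders accept):

import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"        # directory named in the log
CNI_EXTENSIONS = (".conf", ".conflist", ".json")  # extensions CNI loaders consider

def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
    """Approximation of the NetworkReady probe: true iff at least one
    CNI configuration file exists in the conf directory."""
    try:
        return any(name.endswith(CNI_EXTENSIONS) for name in os.listdir(conf_dir))
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    if not network_ready():
        print("NetworkReady=false: no CNI configuration file in", CNI_CONF_DIR)

Once the network plugin (the log later shows an ovnkube-node pod) drops its configuration into that directory, the same loop flips the Ready condition back to True.
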
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:12 crc kubenswrapper[4813]: E1202 10:09:12.067235 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.123785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.123839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.123855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.123875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.123888 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.227158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.227225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.227244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.227270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.227289 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.329331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.329403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.329420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.329448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.329465 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.433375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.433838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.433853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.433878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.433892 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.537199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.537249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.537278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.537318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.537333 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.639447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.639497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.639510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.639527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.639545 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.742975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.743133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.743155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.743181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.743200 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.846334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.846411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.846429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.846459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.846477 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.949955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.950029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.950043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.950065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:12 crc kubenswrapper[4813]: I1202 10:09:12.950104 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:12Z","lastTransitionTime":"2025-12-02T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.053180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.053290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.053314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.053343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.053364 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.067474 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.067489 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:13 crc kubenswrapper[4813]: E1202 10:09:13.067754 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
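
Excerpts like this are dominated by the heartbeat repetition, so a summary view helps when auditing them. A small reading aid, assuming one journal entry per line in a file captured with something like journalctl -u kubelet (the helper and its name are hypothetical, not part of any tool shown here): it strips the journal and klog prefixes, normalizes embedded RFC 3339 timestamps, and counts the distinct messages.

import re
import sys
from collections import Counter

# Strips "Dec 02 10:09:11 crc kubenswrapper[4813]: I1202 10:09:11.088023 4813 "
PREFIX = re.compile(r"^\w{3} \d{2} [\d:]{8} \S+ \S+\[\d+\]: [IEW]\d{4} \S+ +\d+ +")
# Normalizes embedded timestamps so repeated conditions compare equal.
STAMP = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z")

def summarize(lines):
    counts = Counter(
        STAMP.sub("<ts>", PREFIX.sub("", line).strip())
        for line in lines if line.strip()
    )
    for message, n in counts.most_common():
        print(f"{n:5d}  {message}")

if __name__ == "__main__":
    with open(sys.argv[1]) as log:  # e.g. journalctl -u kubelet > kubelet.log
        summarize(log)

For this excerpt it collapses hundreds of near-identical lines into a handful of distinct messages with counts, which makes the underlying problem (the missing CNI configuration) stand out.
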
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:13 crc kubenswrapper[4813]: E1202 10:09:13.067866 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.156550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.156612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.156624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.156653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.156671 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.260209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.260276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.260291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.260307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.260319 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.363471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.363550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.363563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.363639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.363677 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.466781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.466841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.466852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.466872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.466883 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.570597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.570680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.570702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.570728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.570750 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.674045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.674153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.674173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.674201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.674222 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.777525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.777586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.777603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.777629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.777646 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.881104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.881154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.881167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.881185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.881197 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.984521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.984576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.984592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.984612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:13 crc kubenswrapper[4813]: I1202 10:09:13.984625 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:13Z","lastTransitionTime":"2025-12-02T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.066911 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.067032 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:14 crc kubenswrapper[4813]: E1202 10:09:14.067252 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:14 crc kubenswrapper[4813]: E1202 10:09:14.067476 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
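
The same four pods cycle through "Error syncing pod, skipping" every second or two, so it is worth extracting exactly which workloads are blocked. A regex sketch over the same captured log file (a hypothetical helper; the pattern simply matches the pod=/podUID= tail of the pod_workers.go entries above):

import re
import sys

# Matches the pod="ns/name" podUID="uid" tail of the pod_workers.go entries.
POD_RE = re.compile(r'"Error syncing pod, skipping".*pod="([^"]+)" podUID="([^"]+)"')

def stuck_pods(text):
    """Unique (pod, podUID) pairs that failed to sync for network reasons."""
    return sorted(set(POD_RE.findall(text)))

if __name__ == "__main__":
    with open(sys.argv[1]) as log:
        for pod, uid in stuck_pods(log.read()):
            print(f"{pod}  uid={uid}")

On this excerpt it yields network-metrics-daemon-62bfc, networking-console-plugin-85b44fc459-gdk6g, network-check-source-55646444c4-trplf, and network-check-target-xd92c, i.e. exactly the workloads that need pod networking before the CNI plugin is up.
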
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.093761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.093826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.093841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.093862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.093878 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.196119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.196188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.196205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.196230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.196249 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.299866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.299965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.299992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.300025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.300049 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.403959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.404039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.404060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.404125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.404153 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.507575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.507621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.507633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.507651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.507662 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.610045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.610112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.610125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.610142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.610153 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.713312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.713679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.713761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.713853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.713933 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.816679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.817069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.817200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.817449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.817581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.921173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.921243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.921260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.921291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:14 crc kubenswrapper[4813]: I1202 10:09:14.921309 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:14Z","lastTransitionTime":"2025-12-02T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.024409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.024475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.024493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.024517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.024535 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.067466 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.067468 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:15 crc kubenswrapper[4813]: E1202 10:09:15.067688 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:15 crc kubenswrapper[4813]: E1202 10:09:15.067814 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.127434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.127507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.127534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.127564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.127588 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.230815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.230866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.230877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.230895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.230905 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.334189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.334246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.334259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.334278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.334293 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.437009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.437066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.437103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.437125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.437140 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.540294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.540349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.540361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.540383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.540398 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.644063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.644144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.644156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.644176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.644197 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.747327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.747374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.747383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.747398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.747410 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.850976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.851025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.851036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.851052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.851067 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.954615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.954689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.954707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.954759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:15 crc kubenswrapper[4813]: I1202 10:09:15.954778 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:15Z","lastTransitionTime":"2025-12-02T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.058330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.058416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.058433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.058459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.058477 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.067150 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.067160 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:16 crc kubenswrapper[4813]: E1202 10:09:16.067435 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:16 crc kubenswrapper[4813]: E1202 10:09:16.067583 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.096623 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.115997 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.150431 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284c
acc6d97e572bfbd51f90bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:09Z\\\",\\\"message\\\":\\\"gs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1202 10:09:09.230857 6817 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1202 10:09:09.230896 6817 services_controller.go:445] Built service openshift-marketplace/redhat-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI1202 10:09:09.230947 6817 services_controller.go:451] Built service openshift-marketplace/redhat-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:09:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.163930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.163987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.164002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.164023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.164039 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.180421 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.192351 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.192406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.192420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.192447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.192464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.197388 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: E1202 10:09:16.211471 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.213422 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.216174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.216242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.216255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.216281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.216298 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.231495 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df6ab83b-de8c-403d-b118-047d9b949e9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: E1202 10:09:16.233976 4813 
kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/red
hat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987
117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba
717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.239120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.239200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.239222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.239250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.239271 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.251167 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: E1202 10:09:16.255891 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.259907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.259963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.259975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.259998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.260009 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.268151 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: E1202 10:09:16.272531 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.276999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.277056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.277081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.277106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.277120 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.284381 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"2025-12-02T10:08:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9\\\\n2025-12-02T10:08:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9 to /host/opt/cni/bin/\\\\n2025-12-02T10:08:15Z [verbose] multus-daemon started\\\\n2025-12-02T10:08:15Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:09:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: E1202 10:09:16.290003 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: E1202 10:09:16.290218 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.293015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.293044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.293057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.293089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.293102 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.305960 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.329146 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.347108 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.362622 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.378795 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.393399 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.395773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.395976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.396054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.396171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.396187 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.408029 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.419567 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:16Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.499347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.499830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.499925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.500021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.500133 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.602735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.602815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.602828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.602847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.602859 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.706310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.706349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.706358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.706384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.706397 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.808954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.809004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.809017 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.809035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.809055 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.912263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.912329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.912339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.912358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:16 crc kubenswrapper[4813]: I1202 10:09:16.912371 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:16Z","lastTransitionTime":"2025-12-02T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.015152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.015235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.015245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.015282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.015293 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.067298 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.067375 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.067475 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.067533 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.117860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.117928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.117947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.117969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.117983 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.221355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.221413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.221425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.221444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.221455 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.324014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.324065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.324101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.324121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.324133 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.427645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.427692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.427704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.427719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.427729 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.530797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.530841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.530853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.530868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.530879 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.633619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.633673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.633686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.633707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.633727 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.736783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.736862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.736884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.736905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.736922 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.839541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.839590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.839607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.839633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.839651 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.872792 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.873202 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.873346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.873452 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.872949 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.873679 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.873354 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.873850 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.873939 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.874044 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:21.874032715 +0000 UTC m=+146.069207017 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.873409 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.874281 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:21.874266932 +0000 UTC m=+146.069441234 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
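Both kube-api-access-* failures above list the same two missing objects because a service-account token volume is a single projected volume with several sources, and setup fails as a unit if any source is unavailable. A sketch of its shape as a Python dict following the core/v1 schema; the names come from the records above, while the token path, expiry, and downward-API item are typical defaults and are assumptions here:

    # Shape of a kube-api-access-* projected volume (core/v1 schema).
    # Object names are taken from the log; expirationSeconds and the
    # downwardAPI item reflect common defaults, not values printed here.
    kube_api_access_volume = {
        "name": "kube-api-access-s2dwl",
        "projected": {
            "sources": [
                {"serviceAccountToken": {"path": "token",
                                         "expirationSeconds": 3607}},
                {"configMap": {"name": "kube-root-ca.crt",
                               "items": [{"key": "ca.crt", "path": "ca.crt"}]}},
                {"downwardAPI": {"items": [{"path": "namespace",
                                            "fieldRef": {"fieldPath": "metadata.namespace"}}]}},
                {"configMap": {"name": "openshift-service-ca.crt",
                               "items": [{"key": "service-ca.crt",
                                          "path": "service-ca.crt"}]}},
            ]
        },
    }

Until the kubelet has registered both ConfigMaps for the pod, every source, and therefore the whole volume, stays unmountable.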
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.873553 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.874452 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.874534 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.874630 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:21.874618882 +0000 UTC m=+146.069793184 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.942759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.942796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.942804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.942818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.942828 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:17Z","lastTransitionTime":"2025-12-02T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
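The setters.go records that repeat throughout this stretch are the kubelet re-publishing Ready=False on every status sync until a CNI configuration shows up in /etc/kubernetes/cni/net.d/. A hedged sketch that polls the same condition from outside, using the official kubernetes Python client and assuming a working kubeconfig for this cluster:

    # Poll the node's Ready condition until the network plugin comes up.
    # Assumes the `kubernetes` client package and a working kubeconfig.
    import time
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    while True:
        node = v1.read_node("crc")
        ready = next(c for c in node.status.conditions if c.type == "Ready")
        print(f"{ready.last_heartbeat_time} Ready={ready.status} reason={ready.reason}")
        if ready.status == "True":
            break
        time.sleep(5)

While the CNI config is missing, this prints the same KubeletNotReady reason seen in the condition JSON above.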
Has your network provider started?"} Dec 02 10:09:17 crc kubenswrapper[4813]: I1202 10:09:17.974395 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:09:17 crc kubenswrapper[4813]: E1202 10:09:17.974531 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:21.974504293 +0000 UTC m=+146.169678605 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.045261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.045342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.045364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.045393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.045414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.067254 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.067318 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:18 crc kubenswrapper[4813]: E1202 10:09:18.067423 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:18 crc kubenswrapper[4813]: E1202 10:09:18.067542 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.148608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.148959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.149045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.149178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.149247 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.253004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.253044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.253056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.253088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.253105 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.356383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.356714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.356803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.356893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.356956 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.459036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.459096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.459108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.459130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.459149 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.562187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.562237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.562248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.562266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.562278 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.665565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.665633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.665651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.665673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.665687 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.770426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.770505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.770544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.770604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.770630 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.873709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.873767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.873784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.873808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.873829 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.977200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.977279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.977296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.977320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:18 crc kubenswrapper[4813]: I1202 10:09:18.977338 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:18Z","lastTransitionTime":"2025-12-02T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.067304 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.067309 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:19 crc kubenswrapper[4813]: E1202 10:09:19.067482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:19 crc kubenswrapper[4813]: E1202 10:09:19.067558 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.081231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.081318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.081339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.081366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.081385 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.185675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.185753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.185773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.185796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.185815 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.288928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.288998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.289016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.289039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.289059 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.391604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.391654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.391666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.391683 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.391696 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.494488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.494553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.494568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.494588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.494607 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.597686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.597739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.597751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.597771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.597785 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.700506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.700569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.700585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.700608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.700623 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.804218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.804275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.804287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.804307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.804320 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.908419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.908503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.908528 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.908558 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:19 crc kubenswrapper[4813]: I1202 10:09:19.908609 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:19Z","lastTransitionTime":"2025-12-02T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.011600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.011646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.011654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.011670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.011680 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.067434 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.067597 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:20 crc kubenswrapper[4813]: E1202 10:09:20.067635 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:20 crc kubenswrapper[4813]: E1202 10:09:20.067863 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.114655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.114712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.114722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.114738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.114748 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.218014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.218125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.218144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.218172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.218191 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.321282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.321340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.321354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.321372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.321407 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.424431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.424487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.424501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.424521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.424539 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.528171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.528494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.528504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.528520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.528532 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.631106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.631160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.631172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.631191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.631203 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.734489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.734539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.734547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.734563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.734578 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.837216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.837263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.837274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.837292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.837306 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.940751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.940801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.940814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.940831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:20 crc kubenswrapper[4813]: I1202 10:09:20.940853 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:20Z","lastTransitionTime":"2025-12-02T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.043469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.043515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.043524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.043538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.043550 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.067030 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.067150 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:21 crc kubenswrapper[4813]: E1202 10:09:21.067272 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:21 crc kubenswrapper[4813]: E1202 10:09:21.067432 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.147041 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.147136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.147152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.147170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.147184 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.250858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.250909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.250924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.250951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.250968 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.353629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.353671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.353684 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.353703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.353715 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.457038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.457126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.457148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.457170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.457186 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.560426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.560488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.560500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.560518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.560528 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.664370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.664432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.664445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.664465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.664482 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.767829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.767899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.767913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.767932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.767949 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.871214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.871305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.871332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.871362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.871383 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.974764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.974807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.974816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.974831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:21 crc kubenswrapper[4813]: I1202 10:09:21.974841 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:21Z","lastTransitionTime":"2025-12-02T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.073264 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.073364 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:22 crc kubenswrapper[4813]: E1202 10:09:22.073683 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:22 crc kubenswrapper[4813]: E1202 10:09:22.073876 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
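
The block above is the kubelet's node-status sync loop: while the container runtime reports NetworkReady=false, each sync (roughly every 100 ms in this capture) re-records the same four node events and republishes the Ready=False condition, so only the timestamps change from record to record. The gate is the presence of a CNI configuration under /etc/kubernetes/cni/net.d/, which on OpenShift is normally written by the network operator's pods (multus / OVN-Kubernetes) rather than by hand. A minimal sketch of the readiness probe implied by the message, assuming libcni's usual extensions (.conf, .conflist, .json), which is a convention and not the kubelet's exact code path:

```go
// cnicheck.go - sketch of the "is there a CNI config yet?" probe implied by
// the NetworkPluginNotReady message above. The directory comes from the log;
// the accepted extensions follow libcni convention and are an assumption.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", confDir, err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("NetworkReady=true: found %s\n", filepath.Join(confDir, e.Name()))
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file found")
}
```
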
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.077095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.077136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.077146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.077161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.077173 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.180309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.180365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.180378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.180400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.180418 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.283533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.283607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.283625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.283651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.283671 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.386754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.386808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.386820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.386838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.386848 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.489917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.489990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.490005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.490025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.490036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.593025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.593057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.593066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.593095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.593105 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.695694 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.695725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.695732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.695747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.695755 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.798632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.798703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.798721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.798751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.798770 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.902302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.902357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.902371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.902393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:22 crc kubenswrapper[4813]: I1202 10:09:22.902407 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:22Z","lastTransitionTime":"2025-12-02T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.005128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.005183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.005196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.005215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.005228 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.067363 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.067380 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:23 crc kubenswrapper[4813]: E1202 10:09:23.067582 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
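
While the network is down, the pod workers skip every pod that needs a pod-network sandbox ("Error syncing pod, skipping"), so the same handful of pods (network-check-source, network-check-target, network-metrics-daemon, networking-console-plugin) reappears once per retry. To reduce a capture like this to the set of affected pods, a small tally sketch; the regular expression is hand-written against the klog key=value format above, and it assumes one record per line as journalctl normally emits them, so hard-wrapped lines would need rejoining first:

```go
// podskips.go - tally "Error syncing pod, skipping" records per pod from a
// journal capture read on stdin. The regexp matches the klog key=value
// format seen above and assumes one record per line.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`Error syncing pod, skipping.*pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal records can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%6d  %s\n", n, pod)
	}
}
```
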
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:23 crc kubenswrapper[4813]: E1202 10:09:23.067765 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.107584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.107662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.107682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.107706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.107720 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.210414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.210460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.210472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.210487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.210498 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.314067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.314160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.314173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.314198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.314212 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.417780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.417839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.417852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.417871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.417885 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.521171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.521228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.521238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.521258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.521269 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.624140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.624191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.624209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.624229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.624242 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.727790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.727889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.727920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.727949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.727973 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.831661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.831733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.831761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.831793 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.831814 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.934825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.934880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.934893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.934912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:23 crc kubenswrapper[4813]: I1202 10:09:23.934926 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:23Z","lastTransitionTime":"2025-12-02T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.038324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.038400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.038421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.038446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.038464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.068038 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:24 crc kubenswrapper[4813]: E1202 10:09:24.068241 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.068405 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.068546 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:09:24 crc kubenswrapper[4813]: E1202 10:09:24.068576 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:24 crc kubenswrapper[4813]: E1202 10:09:24.068749 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.141689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.141748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.141761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.141779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.141794 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.245307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.245354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.245365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.245398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.245412 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.348371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.348418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.348427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.348446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.348465 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.451343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.451387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.451396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.451410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.451421 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.554116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.554182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.554198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.554222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.554244 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.656893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.656978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.656995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.657022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.657048 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.759845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.759912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.759929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.759955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.759974 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.863206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.863255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.863266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.863283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.863294 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.967116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.967191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.967213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.967240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:24 crc kubenswrapper[4813]: I1202 10:09:24.967258 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:24Z","lastTransitionTime":"2025-12-02T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.066830 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:25 crc kubenswrapper[4813]: E1202 10:09:25.067005 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.067216 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:25 crc kubenswrapper[4813]: E1202 10:09:25.067493 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
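
The "back-off 40s restarting failed container=ovnkube-controller" record a few entries above is the other half of the story: ovnkube-controller, the container that would write the missing CNI config, is itself in CrashLoopBackOff, so the NodeNotReady loop cannot clear until it stays up. The kubelet's restart back-off is commonly documented as starting at 10s and doubling per consecutive failure up to a 5-minute cap, which would make 40s the third failure in a row; a sketch of that schedule, with the base, factor, and cap as assumed defaults rather than values read from the kubelet source:

```go
// backoff.go - the restart schedule implied by "back-off 40s" above. The
// 10s base, 2x factor, and 5m cap are commonly documented kubelet defaults,
// assumed here for illustration.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, limit := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("consecutive failure %d: back-off %v\n", restart, delay)
		if delay *= 2; delay > limit {
			delay = limit
		}
	}
}
```
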
pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.069893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.069945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.069964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.069985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.069999 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.088236 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.172355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.172411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.172423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.172440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.172458 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.276147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.276222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.276244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.277038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.277101 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.380569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.380621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.380632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.380653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.380668 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.483192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.483284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.483310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.483345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.483373 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.586693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.586756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.586770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.586789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.586803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.689763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.689808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.689852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.689873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.689886 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.792723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.792782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.792796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.792816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.792837 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.895712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.895769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.895778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.895796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.895808 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.999477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.999527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.999536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.999554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:25 crc kubenswrapper[4813]: I1202 10:09:25.999565 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:25Z","lastTransitionTime":"2025-12-02T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.067382 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.067512 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:26 crc kubenswrapper[4813]: E1202 10:09:26.067574 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:26 crc kubenswrapper[4813]: E1202 10:09:26.067754 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.085817 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d5d6f91f869a9932cedb8b90a0a8846296f4477a6e236ec5cb7ff750e0b4381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.104932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.104997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.105014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.105047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
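
The "Failed to update status for pod" record above is a separate failure from the CNI one: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-02, a familiar symptom of starting a CRC machine long after its bundled certificates were issued. A quick probe of the serving certificate's validity window, as a sketch; verification is skipped on purpose, since failed verification is the very error being inspected:

```go
// certprobe.go - read the validity window of the TLS certificate served at
// the webhook address from the error above. InsecureSkipVerify is deliberate:
// the expired certificate is exactly what we want to look at.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	addr := "127.0.0.1:9743" // pod.network-node-identity webhook, from the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\nnotBefore: %s\nnotAfter:  %s\n",
		cert.Subject, cert.NotBefore, cert.NotAfter)
}
```

The kubelet keeps retrying these patches, so identical webhook failures recur below for network-check-target-xd92c and ovnkube-node-8jj7j until the certificates rotate or the clock is corrected.
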
event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.105092 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.111131 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.140403 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3551771a-22ef-4f85-ad6b-fa4033a3f90f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284c
acc6d97e572bfbd51f90bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:09Z\\\",\\\"message\\\":\\\"gs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1202 10:09:09.230857 6817 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1202 10:09:09.230896 6817 services_controller.go:445] Built service openshift-marketplace/redhat-operators LB template configs for network=default: []services.lbConfig(nil)\\\\nI1202 10:09:09.230947 6817 services_controller.go:451] Built service openshift-marketplace/redhat-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, T\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:09:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mllp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jj7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.164672 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13fee0e7-46f3-4e78-ac37-0764b073f270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1330450aa3e0a19384673246a0151c3328d9d5202124e80daa037b52666f693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b26457948602a26e4ce1affddda0f0ec627c2db5928df3dc271f28797ad4f79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97d433460f2d6ab6591663460574f830d94697c7a92daee4a8ca5b6d7cd49239\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d57203cdbea11f224c35a359724b62385157ea2b004e5f4d401c3949766bfa5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-12-02T10:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab2de29e0520b5401faf94a5fbf73a418953384ded10142b33f5c6da31cc1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83d9d52173189c10173794e65c88d2341160f0ad664c18cc9c7e5a57f321e348\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5931db2084451ce1e1fae613e8a0f3e4edd0e0a0f5795307e57b569dae848aef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxq2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4ggp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.179830 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fbbe6fd-3820-474c-af83-dc3efb10dea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0043f295ada7982edd314c0ae2a6b43f0d795dfbc3d2dc9da117d152b6ec2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0748caa0d1e74dc84f5b2304bbf79d28ab2222ca128579c4e329e9f1ff3a413a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8d2tn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7fjff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.193939 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07e88a8c-0f55-41fb-9e10-5e7a70a324c5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11b93cdcd218efcc51e4f1e874664d448978ce0c100ee4ee55e18abbd1c0795f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0c913a417c8624bfdad7fbf92d5d8426c23f129c2ccf9cf730d8e7b252b2b36\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddb8f5ee8a172bc2031230431acd9046610a519a0bcdbb65dbd1a88a64cb21e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f0d5429cb2f5bca4067038a02451c8e269ff574124349447638f05612c3be2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.205218 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df6ab83b-de8c-403d-b118-047d9b949e9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c72b531326a0c91002896376299875373aee01be2a4275adbf169f30216355fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be35d49ab6a6b5bc5045534329b54f5933566f5d3cae41ab7dceb6d9ca467803\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.208845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.208904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.208924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.208948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.208964 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.221472 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2a43cba-eadf-448d-9f26-f8a245a3d76d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1202 10:08:08.500590 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 10:08:08.503425 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1720613278/tls.crt::/tmp/serving-cert-1720613278/tls.key\\\\\\\"\\\\nI1202 10:08:14.097722 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1202 10:08:14.106380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1202 10:08:14.106601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1202 10:08:14.106676 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1202 10:08:14.107922 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1202 10:08:14.121174 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1202 10:08:14.121211 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1202 10:08:14.121397 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121429 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1202 10:08:14.121458 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1202 10:08:14.121486 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1202 10:08:14.121511 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1202 10:08:14.121536 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1202 10:08:14.125529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.235800 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db121737-190f-4b43-9d79-e96e2dd76080\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a631a3d02d83c46020f0963396062aac17d0cc56a27d87ed9215f7336e07eaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8285r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4p89g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.251716 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-x7cgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30b516bc-ab92-49fb-8f3b-431cf0ef3164\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T10:09:00Z\\\",\\\"message\\\":\\\"2025-12-02T10:08:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9\\\\n2025-12-02T10:08:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_835e41ee-42ab-49ca-ac7d-62bf34daf1f9 to /host/opt/cni/bin/\\\\n2025-12-02T10:08:15Z [verbose] multus-daemon started\\\\n2025-12-02T10:08:15Z [verbose] Readiness Indicator file check\\\\n2025-12-02T10:09:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q6vrs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-x7cgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.268208 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6704e3-d7c7-4f1f-89e6-fbe74bf20501\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0254fe7fd90887657b476adc405fa7ae61e1904e9381e4d2a189c8a0b42c9114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f312fe7960766d3692c0ef29308444790006d110eeaaed6e30f0e2bfc2da0520\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf06b9247749614fed100dd54aa494dc9377aed03a1910372d78c22c4185f267\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.284316 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.299869 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a5d9196865315c13716c6fdecf03a54bb60e54f892f159a3ac34cfa5f5b8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ff17baa6d103d04064e8fbd9bfa0d02c31a0109abd6b440b3dc22d8e294da85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.311941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.312012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.312028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.312049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.312092 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.322901 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f68bb9a-77a9-4cb3-bb2d-d0564ca2e69c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c477580e13917b8042147823751c8883b6a4405aa944e3b6994fc2bd1935658c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9bb863bc54c186860f87789a3da45432da2e9fe69d89f111f288b46a72567b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b33710d1c57d153914f69e4dd2ad48be6768ec971cf3364740d331cbd0c934f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f3f5152821d03fc1787718207b3314baa0e23cb28b7bcc01c2c047e4e03be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff032df4aad87bc199b2f53df409dbaf3fa8a7fa1f48d0f8dd314f8420d0292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2734586bf848146bffba63504a7006d5e48e6bc1d4fc0e12bb1c29cfeb511590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2734586bf848146bffba63504a7006d5e48e6bc1d4fc0e12bb1c29cfeb511590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7dfa7e462e1d143d02e7b02a9148c3ddac655871de812a6d5ae1d720879ff95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7dfa7e462e1d143d02e7b02a9148c3ddac655871de812a6d5ae1d720879ff95\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-02T10:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://294e1f9c38936e2d36d31e40633ded519ddd6228487c596eb7f24779c1867bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294e1f9c38936e2d36d31e40633ded519ddd6228487c596eb7f24779c1867bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T10:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T10:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:07:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.341207 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fbb40e6-955d-4ba1-b48f-e535ed20494d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ff00803af22cc08c5c68bb3fd269301efc6818c0cb146230671f82ba66c772f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rb2g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.357377 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-62bfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05bb9583-6b23-4207-b709-89dfe49fad73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwzbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-62bfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.378502 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f28cd6f33fb300fd2cad0107da7fbe91de9ea0bd293660b0bf3d40e8fe1bdedd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.396255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.415507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.415600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.415627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.415665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.415686 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.415678 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8f9dg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77042011-320e-4ef3-839b-013ae0e97908\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T10:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://563ffa014cd39c57bb185e9f7c81fa589969b34c32a9cce042e93cf6fb157fba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T10:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wjjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T10:08:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8f9dg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.523109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.523172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.523183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.523202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.523216 4813 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.626286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.626325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.626339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.626360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.626374 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.675613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.675693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.675717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.675746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.675769 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: E1202 10:09:26.702024 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.707935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.707986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.708002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.708036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.708053 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: E1202 10:09:26.724460 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.728729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.728770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.728787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.728809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.728826 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: E1202 10:09:26.745374 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.750243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.750393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.750478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.750583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.750656 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: E1202 10:09:26.766702 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.771625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.771697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.771712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.771735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.771752 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: E1202 10:09:26.788562 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"634e706a-26e4-4e25-9891-c6df4b41c61e\\\",\\\"systemUUID\\\":\\\"fbb40b6c-9f6a-4fae-a398-84ef5378393c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T10:09:26Z is after 2025-08-24T17:21:41Z" Dec 02 10:09:26 crc kubenswrapper[4813]: E1202 10:09:26.788694 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.791361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.791972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.791996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.792022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.792035 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.895873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.895916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.895926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.895941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.895956 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.998197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.998231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.998239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.998253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:26 crc kubenswrapper[4813]: I1202 10:09:26.998263 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:26Z","lastTransitionTime":"2025-12-02T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.067315 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.067353 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:27 crc kubenswrapper[4813]: E1202 10:09:27.067466 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:27 crc kubenswrapper[4813]: E1202 10:09:27.067612 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.101524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.101572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.101581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.101597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.101608 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:27Z","lastTransitionTime":"2025-12-02T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.204478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.204541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.204549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.204567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 10:09:27 crc kubenswrapper[4813]: I1202 10:09:27.204578 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:27Z","lastTransitionTime":"2025-12-02T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry block (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") repeats at ~100 ms intervals, 10:09:27.306 through 10:09:28.032, identical except for timestamps ...]
Dec 02 10:09:28 crc kubenswrapper[4813]: I1202 10:09:28.067261 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 10:09:28 crc kubenswrapper[4813]: E1202 10:09:28.067497 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 10:09:28 crc kubenswrapper[4813]: I1202 10:09:28.067618 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 10:09:28 crc kubenswrapper[4813]: E1202 10:09:28.067790 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... identical NotReady status blocks repeat at ~100 ms intervals, 10:09:28.135 through 10:09:29.062 ...]
Dec 02 10:09:29 crc kubenswrapper[4813]: I1202 10:09:29.067213 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 10:09:29 crc kubenswrapper[4813]: I1202 10:09:29.067312 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc"
Dec 02 10:09:29 crc kubenswrapper[4813]: E1202 10:09:29.067332 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 10:09:29 crc kubenswrapper[4813]: E1202 10:09:29.067488 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73"
[... identical NotReady status blocks repeat at ~100 ms intervals, 10:09:29.165 through 10:09:29.994 ...]
Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.067960 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.067974 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 10:09:30 crc kubenswrapper[4813]: E1202 10:09:30.068236 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:30 crc kubenswrapper[4813]: E1202 10:09:30.068354 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.097355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.097412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.097423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.097479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.097496 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:30Z","lastTransitionTime":"2025-12-02T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.200097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.200142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.200153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.200169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:30 crc kubenswrapper[4813]: I1202 10:09:30.200180 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:30Z","lastTransitionTime":"2025-12-02T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.067091 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc"
Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.067150 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 10:09:31 crc kubenswrapper[4813]: E1202 10:09:31.067289 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:31 crc kubenswrapper[4813]: E1202 10:09:31.067368 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.128862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.128919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.128931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.128950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.128964 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:31Z","lastTransitionTime":"2025-12-02T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.231771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.231819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.231830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.231849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:31 crc kubenswrapper[4813]: I1202 10:09:31.231862 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:31Z","lastTransitionTime":"2025-12-02T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.066874 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 10:09:32 crc kubenswrapper[4813]: E1202 10:09:32.066987 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.067027 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:32 crc kubenswrapper[4813]: E1202 10:09:32.067491 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.159691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.159750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.159766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.159786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.159798 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.263259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.263309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.263319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.263338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.263349 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.365704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.365768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.365782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.365805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.365819 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.468593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.468644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.468660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.468681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.468697 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.571888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.571965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.571987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.572017 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.572034 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.644128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:32 crc kubenswrapper[4813]: E1202 10:09:32.644367 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:09:32 crc kubenswrapper[4813]: E1202 10:09:32.644466 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs podName:05bb9583-6b23-4207-b709-89dfe49fad73 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:36.644439452 +0000 UTC m=+160.839613794 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs") pod "network-metrics-daemon-62bfc" (UID: "05bb9583-6b23-4207-b709-89dfe49fad73") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.676190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.676243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.676260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.676285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.676305 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.780174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.780251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.780269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.780296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.780314 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.883977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.884068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.884130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.884165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.884188 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.986884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.986944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.986956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.986975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:32 crc kubenswrapper[4813]: I1202 10:09:32.986987 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:32Z","lastTransitionTime":"2025-12-02T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.067042 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.067150 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:33 crc kubenswrapper[4813]: E1202 10:09:33.067243 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:33 crc kubenswrapper[4813]: E1202 10:09:33.067326 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.090218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.090288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.090306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.090332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.090352 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.193606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.193646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.193656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.193670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.193679 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.296330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.296371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.296383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.296398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.296409 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.398613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.398655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.398665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.398681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.398693 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.501774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.501838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.501859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.501883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.501900 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.604970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.605021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.605039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.605057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.605085 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.707912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.707965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.707994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.708011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.708022 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.810729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.810774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.810785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.810803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.810815 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.913634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.913712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.913723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.913741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:33 crc kubenswrapper[4813]: I1202 10:09:33.913754 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:33Z","lastTransitionTime":"2025-12-02T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.016018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.016124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.016145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.016178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.016216 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.067044 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.067166 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:34 crc kubenswrapper[4813]: E1202 10:09:34.067284 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:34 crc kubenswrapper[4813]: E1202 10:09:34.067363 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.119507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.119601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.119645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.119667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.119681 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.222953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.223018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.223029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.223052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.223064 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.326437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.326496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.326511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.326534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.326550 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.430119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.430186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.430209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.430238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.430254 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.533429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.533489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.533506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.533529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.533548 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.637030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.637106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.637119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.637140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.637153 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.739121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.739162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.739173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.739188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.739199 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.842448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.842506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.842527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.842549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.842563 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.944897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.944955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.944966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.944983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:34 crc kubenswrapper[4813]: I1202 10:09:34.945000 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:34Z","lastTransitionTime":"2025-12-02T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.048144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.048206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.048220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.048237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.048250 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.067345 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.067420 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:35 crc kubenswrapper[4813]: E1202 10:09:35.067521 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:35 crc kubenswrapper[4813]: E1202 10:09:35.067637 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.150777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.150816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.150826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.150840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.150850 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.253507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.253565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.253581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.253606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.253625 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.356886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.356956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.356975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.357001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.357021 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.460151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.460197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.460213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.460232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.460248 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.563488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.563532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.563543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.563560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.563573 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.665673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.665724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.665739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.665757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.665769 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.768679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.768800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.768812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.768829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.768846 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.871664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.871707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.871719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.871741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.871752 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.974122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.974176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.974192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.974215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:35 crc kubenswrapper[4813]: I1202 10:09:35.974232 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:35Z","lastTransitionTime":"2025-12-02T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.067365 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.067794 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:36 crc kubenswrapper[4813]: E1202 10:09:36.068048 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:36 crc kubenswrapper[4813]: E1202 10:09:36.068202 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.080299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.080351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.080360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.080378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.080391 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.116121 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.11606022 podStartE2EDuration="11.11606022s" podCreationTimestamp="2025-12-02 10:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.115423111 +0000 UTC m=+100.310597413" watchObservedRunningTime="2025-12-02 10:09:36.11606022 +0000 UTC m=+100.311234562" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.126592 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-97mdk" podStartSLOduration=83.126570108 podStartE2EDuration="1m23.126570108s" podCreationTimestamp="2025-12-02 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.126505896 +0000 UTC m=+100.321680198" watchObservedRunningTime="2025-12-02 10:09:36.126570108 +0000 UTC m=+100.321744400" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.182996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.183035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.183062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.183096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.183107 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.187047 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8f9dg" podStartSLOduration=82.187025872 podStartE2EDuration="1m22.187025872s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.186771244 +0000 UTC m=+100.381945566" watchObservedRunningTime="2025-12-02 10:09:36.187025872 +0000 UTC m=+100.382200174" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.198945 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.198916851 podStartE2EDuration="46.198916851s" podCreationTimestamp="2025-12-02 10:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.198793507 +0000 UTC m=+100.393967809" watchObservedRunningTime="2025-12-02 10:09:36.198916851 +0000 UTC m=+100.394091153" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.223000 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.222978527 podStartE2EDuration="37.222978527s" podCreationTimestamp="2025-12-02 10:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.209210333 +0000 UTC m=+100.404384635" watchObservedRunningTime="2025-12-02 10:09:36.222978527 +0000 UTC m=+100.418152829" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.234750 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.23473147199999 podStartE2EDuration="1m22.234731472s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.223615735 +0000 UTC m=+100.418790037" watchObservedRunningTime="2025-12-02 10:09:36.234731472 +0000 UTC m=+100.429905774" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.285254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.285290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.285301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.285314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:36 crc 
kubenswrapper[4813]: I1202 10:09:36.285325 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.322520 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-x4ggp" podStartSLOduration=82.322496067 podStartE2EDuration="1m22.322496067s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.322187578 +0000 UTC m=+100.517361900" watchObservedRunningTime="2025-12-02 10:09:36.322496067 +0000 UTC m=+100.517670369" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.350399 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7fjff" podStartSLOduration=81.350371545 podStartE2EDuration="1m21.350371545s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.334377085 +0000 UTC m=+100.529551387" watchObservedRunningTime="2025-12-02 10:09:36.350371545 +0000 UTC m=+100.545545847" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.351235 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.35122908 podStartE2EDuration="1m23.35122908s" podCreationTimestamp="2025-12-02 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.350352414 +0000 UTC m=+100.545526716" watchObservedRunningTime="2025-12-02 10:09:36.35122908 +0000 UTC m=+100.546403382" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.387636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.387671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.387681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.387696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.387708 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.391132 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podStartSLOduration=83.39110872 podStartE2EDuration="1m23.39110872s" podCreationTimestamp="2025-12-02 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.390818091 +0000 UTC m=+100.585992393" watchObservedRunningTime="2025-12-02 10:09:36.39110872 +0000 UTC m=+100.586283032" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.407759 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-x7cgx" podStartSLOduration=82.407732117 podStartE2EDuration="1m22.407732117s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:36.4068034 +0000 UTC m=+100.601977732" watchObservedRunningTime="2025-12-02 10:09:36.407732117 +0000 UTC m=+100.602906419" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.490108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.490157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.490169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.490188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.490200 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.593274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.593352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.593370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.593394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.593413 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.695844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.695892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.695907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.695938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.695957 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.800111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.800188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.800212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.800248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.800278 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.848100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.848144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.848157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.848176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.848187 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T10:09:36Z","lastTransitionTime":"2025-12-02T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.895281 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg"] Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.895877 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.898097 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.899494 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.900625 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.900753 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.990901 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f94eac6-61db-48b5-83e6-95e24b5b4455-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.991011 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f94eac6-61db-48b5-83e6-95e24b5b4455-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.991090 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f94eac6-61db-48b5-83e6-95e24b5b4455-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.991124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f94eac6-61db-48b5-83e6-95e24b5b4455-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:36 crc kubenswrapper[4813]: I1202 10:09:36.991159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f94eac6-61db-48b5-83e6-95e24b5b4455-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.067434 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.067486 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:37 crc kubenswrapper[4813]: E1202 10:09:37.067611 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:37 crc kubenswrapper[4813]: E1202 10:09:37.067744 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.092280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f94eac6-61db-48b5-83e6-95e24b5b4455-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.092339 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f94eac6-61db-48b5-83e6-95e24b5b4455-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.092382 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f94eac6-61db-48b5-83e6-95e24b5b4455-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.092399 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f94eac6-61db-48b5-83e6-95e24b5b4455-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.092635 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f94eac6-61db-48b5-83e6-95e24b5b4455-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.092707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" 
(UniqueName: \"kubernetes.io/host-path/4f94eac6-61db-48b5-83e6-95e24b5b4455-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.092790 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f94eac6-61db-48b5-83e6-95e24b5b4455-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.093564 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f94eac6-61db-48b5-83e6-95e24b5b4455-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.100454 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f94eac6-61db-48b5-83e6-95e24b5b4455-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.121240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f94eac6-61db-48b5-83e6-95e24b5b4455-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6t6hg\" (UID: \"4f94eac6-61db-48b5-83e6-95e24b5b4455\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.212009 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.674426 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" event={"ID":"4f94eac6-61db-48b5-83e6-95e24b5b4455","Type":"ContainerStarted","Data":"b38d83c03a89de42d57860be6f61f4cc6242396c1aafb17292a1ed5ab143310b"} Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.674512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" event={"ID":"4f94eac6-61db-48b5-83e6-95e24b5b4455","Type":"ContainerStarted","Data":"e12f4504912b05baf6f770c56686522c751aa7314191be338cab14d081e6de25"} Dec 02 10:09:37 crc kubenswrapper[4813]: I1202 10:09:37.694253 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6t6hg" podStartSLOduration=83.694227243 podStartE2EDuration="1m23.694227243s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:37.693472641 +0000 UTC m=+101.888646973" watchObservedRunningTime="2025-12-02 10:09:37.694227243 +0000 UTC m=+101.889401565" Dec 02 10:09:38 crc kubenswrapper[4813]: I1202 10:09:38.066993 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:38 crc kubenswrapper[4813]: I1202 10:09:38.067252 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:38 crc kubenswrapper[4813]: E1202 10:09:38.067529 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:38 crc kubenswrapper[4813]: E1202 10:09:38.067626 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:38 crc kubenswrapper[4813]: I1202 10:09:38.067678 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:09:38 crc kubenswrapper[4813]: E1202 10:09:38.067909 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jj7j_openshift-ovn-kubernetes(3551771a-22ef-4f85-ad6b-fa4033a3f90f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" Dec 02 10:09:39 crc kubenswrapper[4813]: I1202 10:09:39.067387 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:39 crc kubenswrapper[4813]: E1202 10:09:39.068505 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:39 crc kubenswrapper[4813]: I1202 10:09:39.067387 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:39 crc kubenswrapper[4813]: E1202 10:09:39.069121 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:40 crc kubenswrapper[4813]: I1202 10:09:40.067361 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:40 crc kubenswrapper[4813]: I1202 10:09:40.067451 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:40 crc kubenswrapper[4813]: E1202 10:09:40.067570 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:40 crc kubenswrapper[4813]: E1202 10:09:40.067635 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:41 crc kubenswrapper[4813]: I1202 10:09:41.067841 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:41 crc kubenswrapper[4813]: I1202 10:09:41.067894 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:41 crc kubenswrapper[4813]: E1202 10:09:41.068037 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:41 crc kubenswrapper[4813]: E1202 10:09:41.068349 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:42 crc kubenswrapper[4813]: I1202 10:09:42.066971 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:42 crc kubenswrapper[4813]: E1202 10:09:42.067177 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:42 crc kubenswrapper[4813]: I1202 10:09:42.067404 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:42 crc kubenswrapper[4813]: E1202 10:09:42.067526 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:43 crc kubenswrapper[4813]: I1202 10:09:43.067844 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:43 crc kubenswrapper[4813]: E1202 10:09:43.068012 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:43 crc kubenswrapper[4813]: I1202 10:09:43.067855 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:43 crc kubenswrapper[4813]: E1202 10:09:43.068349 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:44 crc kubenswrapper[4813]: I1202 10:09:44.067548 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:44 crc kubenswrapper[4813]: I1202 10:09:44.067622 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:44 crc kubenswrapper[4813]: E1202 10:09:44.067728 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:44 crc kubenswrapper[4813]: E1202 10:09:44.067856 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:45 crc kubenswrapper[4813]: I1202 10:09:45.067118 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:45 crc kubenswrapper[4813]: I1202 10:09:45.067118 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:45 crc kubenswrapper[4813]: E1202 10:09:45.067329 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:45 crc kubenswrapper[4813]: E1202 10:09:45.067407 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:46 crc kubenswrapper[4813]: I1202 10:09:46.067980 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:46 crc kubenswrapper[4813]: I1202 10:09:46.068100 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:46 crc kubenswrapper[4813]: E1202 10:09:46.069888 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:46 crc kubenswrapper[4813]: E1202 10:09:46.070047 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:46 crc kubenswrapper[4813]: I1202 10:09:46.707040 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/1.log" Dec 02 10:09:46 crc kubenswrapper[4813]: I1202 10:09:46.708158 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/0.log" Dec 02 10:09:46 crc kubenswrapper[4813]: I1202 10:09:46.708218 4813 generic.go:334] "Generic (PLEG): container finished" podID="30b516bc-ab92-49fb-8f3b-431cf0ef3164" containerID="b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e" exitCode=1 Dec 02 10:09:46 crc kubenswrapper[4813]: I1202 10:09:46.708256 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x7cgx" event={"ID":"30b516bc-ab92-49fb-8f3b-431cf0ef3164","Type":"ContainerDied","Data":"b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e"} Dec 02 10:09:46 crc kubenswrapper[4813]: I1202 10:09:46.708294 4813 scope.go:117] "RemoveContainer" containerID="c955edbd35e3a5e302e5310366704efff08777d56883ffea63b6c93e73d959ec" Dec 02 10:09:46 crc kubenswrapper[4813]: I1202 10:09:46.708779 4813 scope.go:117] "RemoveContainer" containerID="b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e" Dec 02 10:09:46 crc kubenswrapper[4813]: E1202 10:09:46.708940 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-x7cgx_openshift-multus(30b516bc-ab92-49fb-8f3b-431cf0ef3164)\"" pod="openshift-multus/multus-x7cgx" podUID="30b516bc-ab92-49fb-8f3b-431cf0ef3164" Dec 02 10:09:47 crc kubenswrapper[4813]: I1202 10:09:47.067820 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:47 crc kubenswrapper[4813]: I1202 10:09:47.067866 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:47 crc kubenswrapper[4813]: E1202 10:09:47.068415 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:47 crc kubenswrapper[4813]: E1202 10:09:47.068531 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:47 crc kubenswrapper[4813]: I1202 10:09:47.712334 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/1.log" Dec 02 10:09:48 crc kubenswrapper[4813]: I1202 10:09:48.067260 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:48 crc kubenswrapper[4813]: I1202 10:09:48.067263 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:48 crc kubenswrapper[4813]: E1202 10:09:48.067428 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:48 crc kubenswrapper[4813]: E1202 10:09:48.067533 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:49 crc kubenswrapper[4813]: I1202 10:09:49.067375 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:49 crc kubenswrapper[4813]: I1202 10:09:49.067407 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:49 crc kubenswrapper[4813]: E1202 10:09:49.067512 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:49 crc kubenswrapper[4813]: E1202 10:09:49.067684 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.067575 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.067734 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:50 crc kubenswrapper[4813]: E1202 10:09:50.067786 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:50 crc kubenswrapper[4813]: E1202 10:09:50.068582 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.069156 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.729736 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/3.log" Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.733399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerStarted","Data":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.733947 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.765605 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podStartSLOduration=96.765573523 podStartE2EDuration="1m36.765573523s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:09:50.765283885 +0000 UTC m=+114.960458217" watchObservedRunningTime="2025-12-02 10:09:50.765573523 +0000 UTC m=+114.960747875" Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.802782 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-62bfc"] Dec 02 10:09:50 crc kubenswrapper[4813]: I1202 10:09:50.802942 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:50 crc kubenswrapper[4813]: E1202 10:09:50.803044 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:51 crc kubenswrapper[4813]: I1202 10:09:51.067829 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:51 crc kubenswrapper[4813]: E1202 10:09:51.068008 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:52 crc kubenswrapper[4813]: I1202 10:09:52.067003 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:52 crc kubenswrapper[4813]: I1202 10:09:52.067046 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:52 crc kubenswrapper[4813]: E1202 10:09:52.067217 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:52 crc kubenswrapper[4813]: E1202 10:09:52.067432 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:53 crc kubenswrapper[4813]: I1202 10:09:53.067671 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:53 crc kubenswrapper[4813]: I1202 10:09:53.067785 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:53 crc kubenswrapper[4813]: E1202 10:09:53.068049 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:53 crc kubenswrapper[4813]: E1202 10:09:53.068212 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:54 crc kubenswrapper[4813]: I1202 10:09:54.068007 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:54 crc kubenswrapper[4813]: E1202 10:09:54.068279 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:54 crc kubenswrapper[4813]: I1202 10:09:54.068439 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:54 crc kubenswrapper[4813]: E1202 10:09:54.068682 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:55 crc kubenswrapper[4813]: I1202 10:09:55.067828 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:55 crc kubenswrapper[4813]: I1202 10:09:55.067866 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:55 crc kubenswrapper[4813]: E1202 10:09:55.068358 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:55 crc kubenswrapper[4813]: E1202 10:09:55.068555 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:56 crc kubenswrapper[4813]: I1202 10:09:56.067858 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:56 crc kubenswrapper[4813]: I1202 10:09:56.068915 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:56 crc kubenswrapper[4813]: E1202 10:09:56.069066 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:56 crc kubenswrapper[4813]: E1202 10:09:56.069621 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:56 crc kubenswrapper[4813]: E1202 10:09:56.104286 4813 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 02 10:09:56 crc kubenswrapper[4813]: E1202 10:09:56.160381 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 10:09:57 crc kubenswrapper[4813]: I1202 10:09:57.067807 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:57 crc kubenswrapper[4813]: E1202 10:09:57.067954 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:09:57 crc kubenswrapper[4813]: I1202 10:09:57.067829 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:57 crc kubenswrapper[4813]: E1202 10:09:57.068198 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:58 crc kubenswrapper[4813]: I1202 10:09:58.066962 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:09:58 crc kubenswrapper[4813]: I1202 10:09:58.067135 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:09:58 crc kubenswrapper[4813]: E1202 10:09:58.067299 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:09:58 crc kubenswrapper[4813]: E1202 10:09:58.067497 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:09:59 crc kubenswrapper[4813]: I1202 10:09:59.067582 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:09:59 crc kubenswrapper[4813]: E1202 10:09:59.067789 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:09:59 crc kubenswrapper[4813]: I1202 10:09:59.067600 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:09:59 crc kubenswrapper[4813]: E1202 10:09:59.067928 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:10:00 crc kubenswrapper[4813]: I1202 10:10:00.067359 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:10:00 crc kubenswrapper[4813]: I1202 10:10:00.067465 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:00 crc kubenswrapper[4813]: E1202 10:10:00.067562 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:10:00 crc kubenswrapper[4813]: E1202 10:10:00.067655 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:10:01 crc kubenswrapper[4813]: I1202 10:10:01.067632 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:01 crc kubenswrapper[4813]: I1202 10:10:01.067669 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:10:01 crc kubenswrapper[4813]: E1202 10:10:01.067840 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:10:01 crc kubenswrapper[4813]: E1202 10:10:01.068288 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:10:01 crc kubenswrapper[4813]: I1202 10:10:01.068410 4813 scope.go:117] "RemoveContainer" containerID="b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e" Dec 02 10:10:01 crc kubenswrapper[4813]: E1202 10:10:01.162103 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 10:10:01 crc kubenswrapper[4813]: I1202 10:10:01.775400 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/1.log" Dec 02 10:10:01 crc kubenswrapper[4813]: I1202 10:10:01.775749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x7cgx" event={"ID":"30b516bc-ab92-49fb-8f3b-431cf0ef3164","Type":"ContainerStarted","Data":"fa667f6b53370318e088cf15bfd020ef148487c511e3b82ae02d62cdb5a23253"} Dec 02 10:10:02 crc kubenswrapper[4813]: I1202 10:10:02.066755 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:10:02 crc kubenswrapper[4813]: I1202 10:10:02.066763 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:02 crc kubenswrapper[4813]: E1202 10:10:02.066966 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:10:02 crc kubenswrapper[4813]: E1202 10:10:02.066878 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:10:03 crc kubenswrapper[4813]: I1202 10:10:03.067229 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:03 crc kubenswrapper[4813]: I1202 10:10:03.067282 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:10:03 crc kubenswrapper[4813]: E1202 10:10:03.067408 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:10:03 crc kubenswrapper[4813]: E1202 10:10:03.067670 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:10:04 crc kubenswrapper[4813]: I1202 10:10:04.066929 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:04 crc kubenswrapper[4813]: I1202 10:10:04.067005 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:10:04 crc kubenswrapper[4813]: E1202 10:10:04.067114 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:10:04 crc kubenswrapper[4813]: E1202 10:10:04.067147 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:10:05 crc kubenswrapper[4813]: I1202 10:10:05.067738 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:05 crc kubenswrapper[4813]: I1202 10:10:05.067808 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:10:05 crc kubenswrapper[4813]: E1202 10:10:05.067946 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 10:10:05 crc kubenswrapper[4813]: E1202 10:10:05.068056 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-62bfc" podUID="05bb9583-6b23-4207-b709-89dfe49fad73" Dec 02 10:10:06 crc kubenswrapper[4813]: I1202 10:10:06.067789 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:10:06 crc kubenswrapper[4813]: I1202 10:10:06.067868 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:06 crc kubenswrapper[4813]: E1202 10:10:06.069791 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 10:10:06 crc kubenswrapper[4813]: E1202 10:10:06.069999 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.067284 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.067378 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.069967 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.070168 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.070178 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.070289 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.466704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.497673 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4wtmn"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.498258 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.500557 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfw69"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.500979 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.501507 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-knbz8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.502220 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.508329 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.515645 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.515915 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.515928 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.524474 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.524547 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.524593 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.524768 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.524845 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.524858 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.524965 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.524997 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.525019 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.526167 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.526452 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.527020 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.531794 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.531963 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.532025 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.532043 4813 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.532324 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8hbqc"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.533022 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.533156 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8hbqc" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.533272 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.534684 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.535901 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8dtjd"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.536164 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.536362 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7nd9n"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.536833 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.536917 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.537057 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g8r9r"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.537542 4813 util.go:30] "No sandbox for pod can be found. 
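The "Caches populated" entries above come from client-go reflectors finishing their initial List+Watch for each Secret and ConfigMap a just-scheduled pod references. A minimal client-go sketch of the same pattern, assuming in-cluster credentials and using the openshift-multus namespace from the log purely as an example:

```go
// Sketch of the reflector/informer pattern behind "Caches populated for
// *v1.Secret/...": start informers, then block until the initial list syncs.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Scope to one namespace for brevity; the kubelet tracks exactly the
	// objects its pending pods reference rather than whole namespaces.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 30*time.Second, informers.WithNamespace("openshift-multus"))
	secrets := factory.Core().V1().Secrets().Informer()
	configMaps := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Equivalent moment to the log's "Caches populated": initial sync done.
	if !cache.WaitForCacheSync(stop, secrets.HasSynced, configMaps.HasSynced) {
		panic("cache sync failed")
	}
	fmt.Println("caches populated")
}
```

The burst of these entries right before NodeReady and the SyncLoop ADD flood fits that model: each newly admitted pod spawns watches for its referenced objects, and volume setup below cannot proceed until those caches are warm.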
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d067220e-9800-4c06-b0e2-01d1be8b8986-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539778 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-etcd-client\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-image-import-ca\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539860 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl5fs\" (UniqueName: \"kubernetes.io/projected/17a5f145-950f-4585-a991-6bbe400f41d3-kube-api-access-gl5fs\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-audit\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-config\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539942 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-etcd-serving-ca\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539964 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-encryption-config\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.539982 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d067220e-9800-4c06-b0e2-01d1be8b8986-images\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr5lf\" (UniqueName: \"kubernetes.io/projected/d067220e-9800-4c06-b0e2-01d1be8b8986-kube-api-access-gr5lf\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-client-ca\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8ab6ba98-ca13-4026-b2b6-340906a28b6c-audit-dir\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d067220e-9800-4c06-b0e2-01d1be8b8986-config\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540137 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17a5f145-950f-4585-a991-6bbe400f41d3-serving-cert\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-serving-cert\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzcgk\" (UniqueName: 
\"kubernetes.io/projected/8ab6ba98-ca13-4026-b2b6-340906a28b6c-kube-api-access-mzcgk\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540202 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ab6ba98-ca13-4026-b2b6-340906a28b6c-node-pullsecrets\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.540241 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-config\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.544159 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.544644 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.544860 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.545242 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.545560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.545777 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.549157 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.549700 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mbprt"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.549955 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zkbcp"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.550262 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.550338 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.550419 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.553211 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.553839 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.617619 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.618460 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.641710 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-config\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.641778 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94edb821-0d5a-478e-9582-0e931d97b222-auth-proxy-config\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.641813 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/048e92fd-979c-460a-a018-cabbb1357848-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.641840 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmpj8\" (UniqueName: \"kubernetes.io/projected/048e92fd-979c-460a-a018-cabbb1357848-kube-api-access-cmpj8\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.641874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d067220e-9800-4c06-b0e2-01d1be8b8986-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc 
kubenswrapper[4813]: I1202 10:10:07.641900 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b285edf6-2c62-4357-9caa-c77feb57ff2d-serving-cert\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.641926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b285edf6-2c62-4357-9caa-c77feb57ff2d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.641949 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9t9h\" (UniqueName: \"kubernetes.io/projected/b285edf6-2c62-4357-9caa-c77feb57ff2d-kube-api-access-s9t9h\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.643572 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-config\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.643665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.643699 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-etcd-client\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.643725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-image-import-ca\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.643765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl5fs\" (UniqueName: \"kubernetes.io/projected/17a5f145-950f-4585-a991-6bbe400f41d3-kube-api-access-gl5fs\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.643880 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kx7zt\" (UniqueName: \"kubernetes.io/projected/94edb821-0d5a-478e-9582-0e931d97b222-kube-api-access-kx7zt\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.644962 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-image-import-ca\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646405 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94edb821-0d5a-478e-9582-0e931d97b222-config\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-audit\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646573 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-etcd-serving-ca\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-encryption-config\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646634 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d067220e-9800-4c06-b0e2-01d1be8b8986-images\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646682 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-config\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zzss2\" (UID: \"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" Dec 02 10:10:07 
crc kubenswrapper[4813]: I1202 10:10:07.646758 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr5lf\" (UniqueName: \"kubernetes.io/projected/d067220e-9800-4c06-b0e2-01d1be8b8986-kube-api-access-gr5lf\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646789 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/048e92fd-979c-460a-a018-cabbb1357848-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646821 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/94edb821-0d5a-478e-9582-0e931d97b222-machine-approver-tls\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646852 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-client-ca\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646900 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8ab6ba98-ca13-4026-b2b6-340906a28b6c-audit-dir\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17a5f145-950f-4585-a991-6bbe400f41d3-serving-cert\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.646978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-serving-cert\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d067220e-9800-4c06-b0e2-01d1be8b8986-config\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzcgk\" (UniqueName: 
\"kubernetes.io/projected/8ab6ba98-ca13-4026-b2b6-340906a28b6c-kube-api-access-mzcgk\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647108 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54l6x\" (UniqueName: \"kubernetes.io/projected/3ca46876-8b39-440e-a82f-b6eb424cca00-kube-api-access-54l6x\") pod \"downloads-7954f5f757-8hbqc\" (UID: \"3ca46876-8b39-440e-a82f-b6eb424cca00\") " pod="openshift-console/downloads-7954f5f757-8hbqc" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647141 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647169 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvzm\" (UniqueName: \"kubernetes.io/projected/e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7-kube-api-access-rfvzm\") pod \"cluster-samples-operator-665b6dd947-zzss2\" (UID: \"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647169 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-etcd-client\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647202 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ab6ba98-ca13-4026-b2b6-340906a28b6c-node-pullsecrets\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-etcd-serving-ca\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.647942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-audit\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.648107 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-client-ca\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.648680 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d067220e-9800-4c06-b0e2-01d1be8b8986-config\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.648714 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8ab6ba98-ca13-4026-b2b6-340906a28b6c-audit-dir\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.649483 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-config\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.651117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d067220e-9800-4c06-b0e2-01d1be8b8986-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.651959 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17a5f145-950f-4585-a991-6bbe400f41d3-serving-cert\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.652116 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ab6ba98-ca13-4026-b2b6-340906a28b6c-node-pullsecrets\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.652348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-encryption-config\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.652516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d067220e-9800-4c06-b0e2-01d1be8b8986-images\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.654638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab6ba98-ca13-4026-b2b6-340906a28b6c-serving-cert\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.663875 4813 reflector.go:368] 
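The reconciler entries above trace the kubelet's two-phase flow per pod volume: reconciler_common.go first logs "VerifyControllerAttachedVolume started" and "MountVolume started", then operation_generator.go logs "MountVolume.SetUp succeeded" once the mount lands. The sketch below is an illustrative paraphrase of that ordering as a tiny state machine, not kubelet source; the type and function names are invented for the example:

```go
// Illustrative desired-state reconcile loop: each volume advances
// pending -> attached -> mounted, echoing the log's message ordering.
package main

import "fmt"

type volumeState int

const (
	statePending volumeState = iota // not yet verified attached
	stateAttached                   // VerifyControllerAttachedVolume done
	stateMounted                    // MountVolume.SetUp succeeded
)

type volume struct {
	name  string // e.g. "etcd-client" from the log
	pod   string // e.g. "apiserver-76f77b778f-knbz8"
	state volumeState
}

// reconcile advances every volume one step per pass and reports
// whether the desired state (all mounted) has been reached.
func reconcile(vols []*volume) bool {
	done := true
	for _, v := range vols {
		switch v.state {
		case statePending:
			fmt.Printf("VerifyControllerAttachedVolume started for %q pod %q\n", v.name, v.pod)
			v.state = stateAttached
			done = false
		case stateAttached:
			fmt.Printf("MountVolume.SetUp succeeded for %q pod %q\n", v.name, v.pod)
			v.state = stateMounted
			done = false
		}
	}
	return done
}

func main() {
	vols := []*volume{
		{name: "etcd-client", pod: "apiserver-76f77b778f-knbz8"},
		{name: "serving-cert", pod: "controller-manager-879f6c89f-bfw69"},
	}
	for !reconcile(vols) {
	}
}
```

This matches the interleaving visible in the log: "started" lines for many volumes appear before the corresponding "SetUp succeeded" lines, because the reconciler batches the verify/mount steps across all pods admitted in the same sync.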
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.665817 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.666024 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.666460 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.666689 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.666786 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.666888 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.666970 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.667740 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.667899 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.668324 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.668513 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.668668 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.668852 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.669065 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.669311 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.669471 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.669799 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.669918 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.670032 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.670159 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.671157 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.671349 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.671698 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.671857 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.672022 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.698152 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.699008 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.700033 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.700300 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.700822 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.701973 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.702109 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.703735 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2"]
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.719128 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.719747 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.721782 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.722260 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2"]
Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.722534 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2bgmp"]
4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2bgmp"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.722928 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.723221 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.723221 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.724303 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.725241 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.725962 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzcgk\" (UniqueName: \"kubernetes.io/projected/8ab6ba98-ca13-4026-b2b6-340906a28b6c-kube-api-access-mzcgk\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.726337 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.727659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr5lf\" (UniqueName: \"kubernetes.io/projected/d067220e-9800-4c06-b0e2-01d1be8b8986-kube-api-access-gr5lf\") pod \"machine-api-operator-5694c8668f-4wtmn\" (UID: \"d067220e-9800-4c06-b0e2-01d1be8b8986\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.727738 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tlm8g"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.728355 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.728688 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.729065 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.729391 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.729632 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.731028 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.731663 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.732023 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.732521 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.732616 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.732700 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.733225 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.734342 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.734602 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jxfgc"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.734857 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735131 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735162 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735257 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735268 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735271 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735346 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735347 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735368 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735425 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735477 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735493 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735585 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735625 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735694 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735754 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735793 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735852 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735902 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.735932 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.736001 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.736012 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.736110 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.736192 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.736267 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.736527 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 
10:10:07.736682 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.736779 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.737207 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfw69"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.737291 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.737697 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.740416 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.741172 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl5fs\" (UniqueName: \"kubernetes.io/projected/17a5f145-950f-4585-a991-6bbe400f41d3-kube-api-access-gl5fs\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.744443 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.744620 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.749947 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.750905 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.752256 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.753440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bfw69\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.753993 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.754678 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.755477 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.756872 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ab6ba98-ca13-4026-b2b6-340906a28b6c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-knbz8\" (UID: \"8ab6ba98-ca13-4026-b2b6-340906a28b6c\") " pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.759433 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.759630 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.759892 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.763630 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tt449"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.768127 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.768674 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776292 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-etcd-ca\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94edb821-0d5a-478e-9582-0e931d97b222-auth-proxy-config\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776362 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/048e92fd-979c-460a-a018-cabbb1357848-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776395 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmpj8\" (UniqueName: 
\"kubernetes.io/projected/048e92fd-979c-460a-a018-cabbb1357848-kube-api-access-cmpj8\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-service-ca\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776455 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-trusted-ca-bundle\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-dir\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776464 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776508 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b285edf6-2c62-4357-9caa-c77feb57ff2d-serving-cert\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776534 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9dbt\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-kube-api-access-g9dbt\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776557 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04f208cb-9296-4a48-8f2c-d5589dad97a1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776590 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4979739-3dc4-4820-b52d-ad093d216bd7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776621 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b285edf6-2c62-4357-9caa-c77feb57ff2d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776653 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a917dd4e-95f4-4b15-93f3-d7555f527969-config\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776678 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a917dd4e-95f4-4b15-93f3-d7555f527969-trusted-ca\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776708 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9t9h\" (UniqueName: \"kubernetes.io/projected/b285edf6-2c62-4357-9caa-c77feb57ff2d-kube-api-access-s9t9h\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/98c41744-524c-47d4-b78a-71f53480faba-etcd-client\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776761 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbr72\" (UniqueName: \"kubernetes.io/projected/98c41744-524c-47d4-b78a-71f53480faba-kube-api-access-bbr72\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776895 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-tls\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.776923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4979739-3dc4-4820-b52d-ad093d216bd7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.778413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.778508 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98c41744-524c-47d4-b78a-71f53480faba-serving-cert\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.778550 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7zt\" (UniqueName: \"kubernetes.io/projected/94edb821-0d5a-478e-9582-0e931d97b222-kube-api-access-kx7zt\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.778816 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrcx\" (UniqueName: \"kubernetes.io/projected/b4979739-3dc4-4820-b52d-ad093d216bd7-kube-api-access-ztrcx\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.778867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f208cb-9296-4a48-8f2c-d5589dad97a1-config\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.778921 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.778959 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.778998 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94edb821-0d5a-478e-9582-0e931d97b222-config\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.779033 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f208cb-9296-4a48-8f2c-d5589dad97a1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.779070 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.779148 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-serving-cert\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.779192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.779267 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.779297 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-config\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.781133 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.781733 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m"] Dec 02 10:10:07 crc kubenswrapper[4813]: E1202 10:10:07.782209 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:08.282183156 +0000 UTC m=+132.477357458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.783416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-oauth-serving-cert\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.783543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-trusted-ca\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.784450 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.784668 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b285edf6-2c62-4357-9caa-c77feb57ff2d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.785864 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94edb821-0d5a-478e-9582-0e931d97b222-auth-proxy-config\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.786208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.786315 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.786501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf2q6\" (UniqueName: \"kubernetes.io/projected/a917dd4e-95f4-4b15-93f3-d7555f527969-kube-api-access-bf2q6\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.786528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.786631 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zzss2\" (UID: \"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.786706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.786893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-bound-sa-token\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.786993 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.787185 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/94edb821-0d5a-478e-9582-0e931d97b222-machine-approver-tls\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc 
kubenswrapper[4813]: I1202 10:10:07.787543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/048e92fd-979c-460a-a018-cabbb1357848-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.787642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12650400-55e2-4496-a52e-eae7bd0434e9-metrics-tls\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.787676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-etcd-service-ca\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.787733 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.787742 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b285edf6-2c62-4357-9caa-c77feb57ff2d-serving-cert\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788029 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12650400-55e2-4496-a52e-eae7bd0434e9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/048e92fd-979c-460a-a018-cabbb1357848-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788261 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2vnw\" (UniqueName: \"kubernetes.io/projected/12650400-55e2-4496-a52e-eae7bd0434e9-kube-api-access-h2vnw\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788686 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-oauth-config\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bx9\" (UniqueName: \"kubernetes.io/projected/c5909f8e-1a62-455a-a85a-73d85747e3a7-kube-api-access-x7bx9\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788806 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4979739-3dc4-4820-b52d-ad093d216bd7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788835 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54l6x\" (UniqueName: \"kubernetes.io/projected/3ca46876-8b39-440e-a82f-b6eb424cca00-kube-api-access-54l6x\") pod \"downloads-7954f5f757-8hbqc\" (UID: \"3ca46876-8b39-440e-a82f-b6eb424cca00\") " pod="openshift-console/downloads-7954f5f757-8hbqc" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788868 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-certificates\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvzm\" (UniqueName: \"kubernetes.io/projected/e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7-kube-api-access-rfvzm\") pod \"cluster-samples-operator-665b6dd947-zzss2\" (UID: \"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-config\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788954 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a917dd4e-95f4-4b15-93f3-d7555f527969-serving-cert\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.788990 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-policies\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.789009 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12650400-55e2-4496-a52e-eae7bd0434e9-trusted-ca\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.789046 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbg7v\" (UniqueName: \"kubernetes.io/projected/e967798d-a0d2-40e4-af66-ba0d04ac8318-kube-api-access-vbg7v\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.789778 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/94edb821-0d5a-478e-9582-0e931d97b222-machine-approver-tls\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.790138 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.790329 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.791564 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zzss2\" (UID: \"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.794053 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4wtmn"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.794116 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.794433 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.794729 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94edb821-0d5a-478e-9582-0e931d97b222-config\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.794770 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/048e92fd-979c-460a-a018-cabbb1357848-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.795821 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.796964 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.799346 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.799479 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.799607 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-knbz8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.799636 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8hbqc"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.799654 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.799909 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.800013 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.800176 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.800310 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.800472 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.801205 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.801258 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-22vj7"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.801371 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.802762 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.804805 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.806257 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f2zd8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.807417 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.807573 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g8r9r"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.809691 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mbprt"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.810715 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.811785 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8dtjd"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.812938 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.813417 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.813918 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.814929 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.815972 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.816943 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7nd9n"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.817994 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.819205 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zkbcp"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.820269 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.820717 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.821567 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.822490 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.823337 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jxfgc"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.824793 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-q4k5s"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.825761 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.825895 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.826271 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.827292 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.828459 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.829458 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tlm8g"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.830446 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f2zd8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.831486 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.832450 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9ztk4"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.833588 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.833658 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.833815 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lzjn8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.839603 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.842748 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.844849 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.848153 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.849811 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kmcbt"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.850598 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.850724 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kmcbt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.851972 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.852336 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.852986 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.854017 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.855053 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.856116 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tt449"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.857630 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9ztk4"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.858282 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kmcbt"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.859778 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-22vj7"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.860446 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lzjn8"] Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.861005 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.880622 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.889670 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890341 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-client-ca\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890395 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bx9\" (UniqueName: \"kubernetes.io/projected/c5909f8e-1a62-455a-a85a-73d85747e3a7-kube-api-access-x7bx9\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 
10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890426 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59p9v\" (UniqueName: \"kubernetes.io/projected/75ac4f31-f970-4342-ac67-8e1354f183e2-kube-api-access-59p9v\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890453 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a4ac4-4417-42fd-8165-1be41729d64f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890480 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7qj\" (UniqueName: \"kubernetes.io/projected/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-kube-api-access-2f7qj\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:07 crc kubenswrapper[4813]: E1202 10:10:07.890528 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:08.390499074 +0000 UTC m=+132.585673376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890579 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjrs\" (UniqueName: \"kubernetes.io/projected/581444db-9870-4a61-a384-c3a96bff71de-kube-api-access-dtjrs\") pod \"migrator-59844c95c7-7cqw2\" (UID: \"581444db-9870-4a61-a384-c3a96bff71de\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890626 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890646 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-encryption-config\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc 
kubenswrapper[4813]: I1202 10:10:07.890672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12650400-55e2-4496-a52e-eae7bd0434e9-trusted-ca\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbg7v\" (UniqueName: \"kubernetes.io/projected/e967798d-a0d2-40e4-af66-ba0d04ac8318-kube-api-access-vbg7v\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bcfafe-52bc-44c0-813a-fede5ecfdc41-serving-cert\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890778 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80775159-a100-48e2-a896-ff8c5121cd39-serving-cert\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890835 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b72ec18d-92ae-4544-957a-036f8e948b1c-proxy-tls\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.890976 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-etcd-ca\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891047 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891088 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec41424c-e403-485d-aa92-32c0c41e7238-config-volume\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891132 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b72ec18d-92ae-4544-957a-036f8e948b1c-images\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-key\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-service-ca\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891250 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9dbt\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-kube-api-access-g9dbt\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04f208cb-9296-4a48-8f2c-d5589dad97a1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891308 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a917dd4e-95f4-4b15-93f3-d7555f527969-trusted-ca\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891332 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-serving-cert\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891356 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhx4\" (UniqueName: \"kubernetes.io/projected/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-kube-api-access-wqhx4\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891475 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6478\" (UniqueName: 
\"kubernetes.io/projected/3b4f0840-cc07-4b0a-a96b-5534312b0553-kube-api-access-n6478\") pod \"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1426bd-91d8-43d6-8d72-5316200e13c7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891541 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-webhook-cert\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891568 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4979739-3dc4-4820-b52d-ad093d216bd7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891592 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-audit-policies\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891618 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891901 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892048 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12650400-55e2-4496-a52e-eae7bd0434e9-trusted-ca\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892058 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98c41744-524c-47d4-b78a-71f53480faba-serving-cert\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-service-ca\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892171 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-service-ca-bundle\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892242 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892272 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpzxp\" (UniqueName: \"kubernetes.io/projected/bd2a4ac4-4417-42fd-8165-1be41729d64f-kube-api-access-qpzxp\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892307 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-metrics-certs\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892355 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bd2a4ac4-4417-42fd-8165-1be41729d64f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892380 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwhg\" (UniqueName: \"kubernetes.io/projected/fc3b41fb-9bd1-4f00-9653-8e73a695de87-kube-api-access-ggwhg\") pod \"package-server-manager-789f6589d5-rbpqf\" (UID: \"fc3b41fb-9bd1-4f00-9653-8e73a695de87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892409 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-oauth-serving-cert\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-config\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892521 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b1426bd-91d8-43d6-8d72-5316200e13c7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892573 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892616 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-tmpfs\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892722 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2ht\" (UniqueName: \"kubernetes.io/projected/01bcfafe-52bc-44c0-813a-fede5ecfdc41-kube-api-access-mh2ht\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892801 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892842 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892893 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-etcd-service-ca\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892899 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a917dd4e-95f4-4b15-93f3-d7555f527969-trusted-ca\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75ac4f31-f970-4342-ac67-8e1354f183e2-audit-dir\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892960 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-srv-cert\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.892993 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b4f0840-cc07-4b0a-a96b-5534312b0553-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893086 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12650400-55e2-4496-a52e-eae7bd0434e9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893121 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2vnw\" (UniqueName: \"kubernetes.io/projected/12650400-55e2-4496-a52e-eae7bd0434e9-kube-api-access-h2vnw\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893157 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d932b1-7a66-4020-b200-fb2ae977f7bf-service-ca-bundle\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893210 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bqzb\" (UniqueName: \"kubernetes.io/projected/8f750ee8-dda0-4af2-a692-412153a3f80e-kube-api-access-2bqzb\") pod \"multus-admission-controller-857f4d67dd-jxfgc\" (UID: \"8f750ee8-dda0-4af2-a692-412153a3f80e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-oauth-serving-cert\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-oauth-config\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893292 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkjm\" (UniqueName: \"kubernetes.io/projected/27186c4d-b911-4ef8-8e86-082ddf35d6b7-kube-api-access-9tkjm\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4979739-3dc4-4820-b52d-ad093d216bd7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893348 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-default-certificate\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-certificates\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-config\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893442 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a917dd4e-95f4-4b15-93f3-d7555f527969-serving-cert\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893468 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-policies\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdzbg\" (UniqueName: \"kubernetes.io/projected/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-kube-api-access-hdzbg\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.891670 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-etcd-ca\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893522 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-serving-cert\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893742 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8f750ee8-dda0-4af2-a692-412153a3f80e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jxfgc\" (UID: \"8f750ee8-dda0-4af2-a692-412153a3f80e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893819 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893841 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmplb\" (UniqueName: \"kubernetes.io/projected/03ddc93f-c104-482e-a615-1f6ce52c62b8-kube-api-access-fmplb\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.893868 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf7lb\" (UniqueName: \"kubernetes.io/projected/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-kube-api-access-lf7lb\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894432 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894598 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-config\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894721 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-policies\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894807 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bcfafe-52bc-44c0-813a-fede5ecfdc41-config\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-trusted-ca-bundle\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-dir\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b4f0840-cc07-4b0a-a96b-5534312b0553-proxy-tls\") pod \"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894957 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s6jg\" (UniqueName: \"kubernetes.io/projected/f3d932b1-7a66-4020-b200-fb2ae977f7bf-kube-api-access-2s6jg\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.894924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-config\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895015 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-dir\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc 
kubenswrapper[4813]: I1202 10:10:07.895101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4979739-3dc4-4820-b52d-ad093d216bd7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895129 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b72ec18d-92ae-4544-957a-036f8e948b1c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-cabundle\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895325 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-certificates\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a917dd4e-95f4-4b15-93f3-d7555f527969-config\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895485 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec41424c-e403-485d-aa92-32c0c41e7238-secret-volume\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895530 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbr72\" (UniqueName: \"kubernetes.io/projected/98c41744-524c-47d4-b78a-71f53480faba-kube-api-access-bbr72\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895582 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/98c41744-524c-47d4-b78a-71f53480faba-etcd-client\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895610 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcls4\" 
(UniqueName: \"kubernetes.io/projected/ec41424c-e403-485d-aa92-32c0c41e7238-kube-api-access-hcls4\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895646 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/98c41744-524c-47d4-b78a-71f53480faba-etcd-service-ca\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895674 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895700 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc3b41fb-9bd1-4f00-9653-8e73a695de87-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rbpqf\" (UID: \"fc3b41fb-9bd1-4f00-9653-8e73a695de87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bzl\" (UniqueName: \"kubernetes.io/projected/b72ec18d-92ae-4544-957a-036f8e948b1c-kube-api-access-t2bzl\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-config\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.895832 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-tls\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: E1202 10:10:07.896068 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:08.396050797 +0000 UTC m=+132.591225099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.896366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a917dd4e-95f4-4b15-93f3-d7555f527969-config\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.896429 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrcx\" (UniqueName: \"kubernetes.io/projected/b4979739-3dc4-4820-b52d-ad093d216bd7-kube-api-access-ztrcx\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.896453 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f208cb-9296-4a48-8f2c-d5589dad97a1-config\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.896705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.896865 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.896906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f208cb-9296-4a48-8f2c-d5589dad97a1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.897022 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-profile-collector-cert\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.897154 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-serving-cert\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.897250 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f208cb-9296-4a48-8f2c-d5589dad97a1-config\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.897354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.897663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-trusted-ca\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.897737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1426bd-91d8-43d6-8d72-5316200e13c7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.898200 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.898577 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.898588 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-oauth-config\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.898700 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.898771 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-stats-auth\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.898968 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf2q6\" (UniqueName: \"kubernetes.io/projected/a917dd4e-95f4-4b15-93f3-d7555f527969-kube-api-access-bf2q6\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899017 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtvg\" (UniqueName: \"kubernetes.io/projected/80775159-a100-48e2-a896-ff8c5121cd39-kube-api-access-nvtvg\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-etcd-client\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899177 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-bound-sa-token\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899216 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12650400-55e2-4496-a52e-eae7bd0434e9-metrics-tls\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899310 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-config\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899382 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4979739-3dc4-4820-b52d-ad093d216bd7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.899938 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-trusted-ca-bundle\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.900197 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.900302 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.900988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-trusted-ca\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.902292 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.902344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98c41744-524c-47d4-b78a-71f53480faba-serving-cert\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.902504 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/98c41744-524c-47d4-b78a-71f53480faba-etcd-client\") pod \"etcd-operator-b45778765-7nd9n\" (UID: 
\"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.902920 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.903022 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4979739-3dc4-4820-b52d-ad093d216bd7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.904835 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a917dd4e-95f4-4b15-93f3-d7555f527969-serving-cert\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.905136 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/12650400-55e2-4496-a52e-eae7bd0434e9-metrics-tls\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.905316 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.905759 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f208cb-9296-4a48-8f2c-d5589dad97a1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.906276 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.906663 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 
10:10:07.907598 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.908846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.911386 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-tls\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.911411 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-serving-cert\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.914370 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.921326 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.941721 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.961848 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 10:10:07 crc kubenswrapper[4813]: I1202 10:10:07.980805 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.000827 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.001540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.001745 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc3b41fb-9bd1-4f00-9653-8e73a695de87-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rbpqf\" (UID: \"fc3b41fb-9bd1-4f00-9653-8e73a695de87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.001787 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2bzl\" (UniqueName: \"kubernetes.io/projected/b72ec18d-92ae-4544-957a-036f8e948b1c-kube-api-access-t2bzl\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.001817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcls4\" (UniqueName: \"kubernetes.io/projected/ec41424c-e403-485d-aa92-32c0c41e7238-kube-api-access-hcls4\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.001855 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-config\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.001927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-profile-collector-cert\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.001975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1426bd-91d8-43d6-8d72-5316200e13c7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002019 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-stats-auth\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvtvg\" (UniqueName: \"kubernetes.io/projected/80775159-a100-48e2-a896-ff8c5121cd39-kube-api-access-nvtvg\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002100 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-etcd-client\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-config\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002151 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002177 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-client-ca\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002211 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59p9v\" (UniqueName: \"kubernetes.io/projected/75ac4f31-f970-4342-ac67-8e1354f183e2-kube-api-access-59p9v\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002237 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002266 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-encryption-config\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002290 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a4ac4-4417-42fd-8165-1be41729d64f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.002325 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 10:10:08.502298745 +0000 UTC m=+132.697473037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002375 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f7qj\" (UniqueName: \"kubernetes.io/projected/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-kube-api-access-2f7qj\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002407 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjrs\" (UniqueName: \"kubernetes.io/projected/581444db-9870-4a61-a384-c3a96bff71de-kube-api-access-dtjrs\") pod \"migrator-59844c95c7-7cqw2\" (UID: \"581444db-9870-4a61-a384-c3a96bff71de\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bcfafe-52bc-44c0-813a-fede5ecfdc41-serving-cert\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80775159-a100-48e2-a896-ff8c5121cd39-serving-cert\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b72ec18d-92ae-4544-957a-036f8e948b1c-proxy-tls\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002504 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec41424c-e403-485d-aa92-32c0c41e7238-config-volume\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b72ec18d-92ae-4544-957a-036f8e948b1c-images\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-key\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-serving-cert\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqhx4\" (UniqueName: \"kubernetes.io/projected/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-kube-api-access-wqhx4\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6478\" (UniqueName: \"kubernetes.io/projected/3b4f0840-cc07-4b0a-a96b-5534312b0553-kube-api-access-n6478\") pod \"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002652 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-webhook-cert\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002667 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1426bd-91d8-43d6-8d72-5316200e13c7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-audit-policies\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002706 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002744 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-service-ca-bundle\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpzxp\" (UniqueName: \"kubernetes.io/projected/bd2a4ac4-4417-42fd-8165-1be41729d64f-kube-api-access-qpzxp\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002785 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwhg\" (UniqueName: \"kubernetes.io/projected/fc3b41fb-9bd1-4f00-9653-8e73a695de87-kube-api-access-ggwhg\") pod \"package-server-manager-789f6589d5-rbpqf\" (UID: \"fc3b41fb-9bd1-4f00-9653-8e73a695de87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-metrics-certs\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2a4ac4-4417-42fd-8165-1be41729d64f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002838 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b1426bd-91d8-43d6-8d72-5316200e13c7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:08 crc 
kubenswrapper[4813]: I1202 10:10:08.002857 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-tmpfs\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002875 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2ht\" (UniqueName: \"kubernetes.io/projected/01bcfafe-52bc-44c0-813a-fede5ecfdc41-kube-api-access-mh2ht\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002895 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75ac4f31-f970-4342-ac67-8e1354f183e2-audit-dir\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-srv-cert\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002929 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b4f0840-cc07-4b0a-a96b-5534312b0553-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002955 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bqzb\" (UniqueName: \"kubernetes.io/projected/8f750ee8-dda0-4af2-a692-412153a3f80e-kube-api-access-2bqzb\") pod \"multus-admission-controller-857f4d67dd-jxfgc\" (UID: \"8f750ee8-dda0-4af2-a692-412153a3f80e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002972 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d932b1-7a66-4020-b200-fb2ae977f7bf-service-ca-bundle\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.002998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tkjm\" (UniqueName: \"kubernetes.io/projected/27186c4d-b911-4ef8-8e86-082ddf35d6b7-kube-api-access-9tkjm\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-default-certificate\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003051 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdzbg\" (UniqueName: \"kubernetes.io/projected/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-kube-api-access-hdzbg\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003089 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-serving-cert\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003108 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf7lb\" (UniqueName: \"kubernetes.io/projected/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-kube-api-access-lf7lb\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003126 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8f750ee8-dda0-4af2-a692-412153a3f80e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jxfgc\" (UID: \"8f750ee8-dda0-4af2-a692-412153a3f80e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b1426bd-91d8-43d6-8d72-5316200e13c7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003140 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmplb\" (UniqueName: \"kubernetes.io/projected/03ddc93f-c104-482e-a615-1f6ce52c62b8-kube-api-access-fmplb\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: 
\"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003292 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bcfafe-52bc-44c0-813a-fede5ecfdc41-config\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003348 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b4f0840-cc07-4b0a-a96b-5534312b0553-proxy-tls\") pod \"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003373 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s6jg\" (UniqueName: \"kubernetes.io/projected/f3d932b1-7a66-4020-b200-fb2ae977f7bf-kube-api-access-2s6jg\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b72ec18d-92ae-4544-957a-036f8e948b1c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003427 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-cabundle\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003451 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec41424c-e403-485d-aa92-32c0c41e7238-secret-volume\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.003562 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-client-ca\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 
10:10:08.003876 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-service-ca-bundle\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.004605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b4f0840-cc07-4b0a-a96b-5534312b0553-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.004831 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-config\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.005400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d932b1-7a66-4020-b200-fb2ae977f7bf-service-ca-bundle\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.006162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-tmpfs\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.006209 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75ac4f31-f970-4342-ac67-8e1354f183e2-audit-dir\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.006426 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b72ec18d-92ae-4544-957a-036f8e948b1c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.010327 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-metrics-certs\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.010639 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-stats-auth\") pod \"router-default-5444994796-2bgmp\" (UID: 
\"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.012214 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b1426bd-91d8-43d6-8d72-5316200e13c7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.012505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f3d932b1-7a66-4020-b200-fb2ae977f7bf-default-certificate\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.013488 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-serving-cert\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.021567 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.033760 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4wtmn"] Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.040908 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: W1202 10:10:08.045917 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd067220e_9800_4c06_b0e2_01d1be8b8986.slice/crio-660bf5b63a50e12fe3218c7768c57263aaff1107e20ec9c652f3605160149e40 WatchSource:0}: Error finding container 660bf5b63a50e12fe3218c7768c57263aaff1107e20ec9c652f3605160149e40: Status 404 returned error can't find the container with id 660bf5b63a50e12fe3218c7768c57263aaff1107e20ec9c652f3605160149e40 Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.048054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.060917 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.060932 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfw69"] Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.061711 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-knbz8"] Dec 02 
10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.067306 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.067387 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.069279 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.080845 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.100570 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.104717 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.105246 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:08.605228135 +0000 UTC m=+132.800402437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.110013 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80775159-a100-48e2-a896-ff8c5121cd39-serving-cert\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.121105 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.147332 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.157108 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.161608 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.180177 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.183681 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80775159-a100-48e2-a896-ff8c5121cd39-config\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.200769 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.206810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.207784 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:08.707743093 +0000 UTC m=+132.902917545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.220761 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.240773 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.260548 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.281304 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.300588 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.309142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.309623 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:08.809603691 +0000 UTC m=+133.004777993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.321245 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.331450 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-serving-cert\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.341940 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.362291 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.380631 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.401767 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.406282 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a4ac4-4417-42fd-8165-1be41729d64f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.410529 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.410687 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:08.910665126 +0000 UTC m=+133.105839438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.410718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.411196 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:08.911181351 +0000 UTC m=+133.106355653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.421365 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.427809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2a4ac4-4417-42fd-8165-1be41729d64f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.440910 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.461487 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.466877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-etcd-client\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.480544 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.501731 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 10:10:08 
crc kubenswrapper[4813]: I1202 10:10:08.504510 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.512291 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.512591 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.012552896 +0000 UTC m=+133.207727228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.512826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.513336 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.013317938 +0000 UTC m=+133.208492250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.520671 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.527577 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-audit-policies\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.541408 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.545399 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75ac4f31-f970-4342-ac67-8e1354f183e2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.561221 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.581352 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.595478 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8f750ee8-dda0-4af2-a692-412153a3f80e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jxfgc\" (UID: \"8f750ee8-dda0-4af2-a692-412153a3f80e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.601017 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.609725 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b4f0840-cc07-4b0a-a96b-5534312b0553-proxy-tls\") pod \"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.615826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.615928 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.115905098 +0000 UTC m=+133.311079400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.616037 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.616375 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.116365572 +0000 UTC m=+133.311539874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.621117 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.641619 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.682143 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.688225 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/75ac4f31-f970-4342-ac67-8e1354f183e2-encryption-config\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.702643 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.717698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.717927 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.217895691 +0000 UTC m=+133.413070013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.718264 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.718683 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.218665303 +0000 UTC m=+133.413839615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.721024 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.739275 4813 request.go:700] Waited for 1.001703413s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0 Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.741673 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.749530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-webhook-cert\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.751175 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.760934 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.780452 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.789380 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bcfafe-52bc-44c0-813a-fede5ecfdc41-serving-cert\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.800356 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.814217 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ab6ba98-ca13-4026-b2b6-340906a28b6c" containerID="1b097ac6e6fd00bdbad674437de7e399b9ffc539e7042f77e790f052ac24395e" exitCode=0 Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.814352 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" event={"ID":"8ab6ba98-ca13-4026-b2b6-340906a28b6c","Type":"ContainerDied","Data":"1b097ac6e6fd00bdbad674437de7e399b9ffc539e7042f77e790f052ac24395e"} Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.814434 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" event={"ID":"8ab6ba98-ca13-4026-b2b6-340906a28b6c","Type":"ContainerStarted","Data":"7a29982421da3bcbcc8cabbada0751c9d7f7a8432b3cb5333bf819c3a9d4ebdd"} Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.817457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" event={"ID":"17a5f145-950f-4585-a991-6bbe400f41d3","Type":"ContainerStarted","Data":"4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df"} Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.817503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" event={"ID":"17a5f145-950f-4585-a991-6bbe400f41d3","Type":"ContainerStarted","Data":"58f8a37497c8cf5654579aba870327e374a8881a738c08cbb7aa0321164ff6d1"} Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.817811 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.820335 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.820503 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.32047712 +0000 UTC m=+133.515651432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.820578 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" event={"ID":"d067220e-9800-4c06-b0e2-01d1be8b8986","Type":"ContainerStarted","Data":"8a52c1dc7f5598d131d382c81e90e4aa556fced3e220ab97e532f19cc040dd7c"} Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.820613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" event={"ID":"d067220e-9800-4c06-b0e2-01d1be8b8986","Type":"ContainerStarted","Data":"660bf5b63a50e12fe3218c7768c57263aaff1107e20ec9c652f3605160149e40"} Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.820896 4813 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bfw69 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.820957 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" podUID="17a5f145-950f-4585-a991-6bbe400f41d3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.821446 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.824738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.825450 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.325419292 +0000 UTC m=+133.520593604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.842505 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.862341 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.867670 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-profile-collector-cert\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.868612 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec41424c-e403-485d-aa92-32c0c41e7238-secret-volume\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.881730 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.901105 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.908176 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec41424c-e403-485d-aa92-32c0c41e7238-config-volume\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.921243 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.926239 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.926419 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.426391478 +0000 UTC m=+133.621565790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.926655 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:08 crc kubenswrapper[4813]: E1202 10:10:08.927417 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.427388749 +0000 UTC m=+133.622563071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.928032 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-srv-cert\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.940813 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.946306 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bcfafe-52bc-44c0-813a-fede5ecfdc41-config\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.980946 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9t9h\" (UniqueName: \"kubernetes.io/projected/b285edf6-2c62-4357-9caa-c77feb57ff2d-kube-api-access-s9t9h\") pod \"openshift-config-operator-7777fb866f-9rnfw\" (UID: \"b285edf6-2c62-4357-9caa-c77feb57ff2d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:08 crc kubenswrapper[4813]: I1202 10:10:08.985512 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.002944 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 10:10:09 crc 
kubenswrapper[4813]: E1202 10:10:09.005909 4813 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006058 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca podName:03ddc93f-c104-482e-a615-1f6ce52c62b8 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.50601064 +0000 UTC m=+133.701184962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca") pod "marketplace-operator-79b997595-tt449" (UID: "03ddc93f-c104-482e-a615-1f6ce52c62b8") : failed to sync configmap cache: timed out waiting for the condition
Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006549 4813 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006622 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-key podName:27186c4d-b911-4ef8-8e86-082ddf35d6b7 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.506596358 +0000 UTC m=+133.701770670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-key") pod "service-ca-9c57cc56f-22vj7" (UID: "27186c4d-b911-4ef8-8e86-082ddf35d6b7") : failed to sync secret cache: timed out waiting for the condition
Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006656 4813 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006707 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72ec18d-92ae-4544-957a-036f8e948b1c-proxy-tls podName:b72ec18d-92ae-4544-957a-036f8e948b1c nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.506696111 +0000 UTC m=+133.701870423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/b72ec18d-92ae-4544-957a-036f8e948b1c-proxy-tls") pod "machine-config-operator-74547568cd-vn2ml" (UID: "b72ec18d-92ae-4544-957a-036f8e948b1c") : failed to sync secret cache: timed out waiting for the condition
Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006737 4813 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
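
This burst of "failed to sync configmap cache" / "failed to sync secret cache: timed out waiting for the condition" errors is a startup race rather than missing objects: volume setup ran before the kubelet's informer caches for those namespaces had filled, and the matching "Caches populated" lines plus successful MountVolume.SetUp entries for the same volumes (marketplace-trusted-ca, signing-key, proxy-tls, images) appear further down once the retry fires. The trailing "timed out waiting for the condition" is the generic timeout error from the Kubernetes wait helpers. Below is a dependency-free sketch of that wait-for-sync pattern; hasSynced stands in for an informer's HasSynced function and the intervals are arbitrary.

// Illustrative sketch: poll until a cache reports synced, or give up with
// the generic "timed out waiting for the condition" error seen in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errWaitTimeout = errors.New("timed out waiting for the condition")

func waitForCacheSync(hasSynced func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if hasSynced() {
			return nil
		}
		if time.Now().After(deadline) {
			return errWaitTimeout
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// The cache becomes ready only after the waiter's deadline, forcing the
	// same failure-then-retry shape as the entries above.
	synced := func() bool { return time.Since(start) > 250*time.Millisecond }

	if err := waitForCacheSync(synced, 10*time.Millisecond, 100*time.Millisecond); err != nil {
		fmt.Printf("Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: %v\n", err)
	}
}
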
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/b72ec18d-92ae-4544-957a-036f8e948b1c-images") pod "machine-config-operator-74547568cd-vn2ml" (UID: "b72ec18d-92ae-4544-957a-036f8e948b1c") : failed to sync configmap cache: timed out waiting for the condition Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006826 4813 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006888 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics podName:03ddc93f-c104-482e-a615-1f6ce52c62b8 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.506865866 +0000 UTC m=+133.702040178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics") pod "marketplace-operator-79b997595-tt449" (UID: "03ddc93f-c104-482e-a615-1f6ce52c62b8") : failed to sync secret cache: timed out waiting for the condition Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006930 4813 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.006996 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc3b41fb-9bd1-4f00-9653-8e73a695de87-package-server-manager-serving-cert podName:fc3b41fb-9bd1-4f00-9653-8e73a695de87 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.50698311 +0000 UTC m=+133.702157432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/fc3b41fb-9bd1-4f00-9653-8e73a695de87-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-rbpqf" (UID: "fc3b41fb-9bd1-4f00-9653-8e73a695de87") : failed to sync secret cache: timed out waiting for the condition Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.007655 4813 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.007744 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-cabundle podName:27186c4d-b911-4ef8-8e86-082ddf35d6b7 nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.507709852 +0000 UTC m=+133.702884164 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-cabundle") pod "service-ca-9c57cc56f-22vj7" (UID: "27186c4d-b911-4ef8-8e86-082ddf35d6b7") : failed to sync configmap cache: timed out waiting for the condition Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.021782 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.027790 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.028695 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.528553561 +0000 UTC m=+133.723728033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.048582 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.060437 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.100713 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmpj8\" (UniqueName: \"kubernetes.io/projected/048e92fd-979c-460a-a018-cabbb1357848-kube-api-access-cmpj8\") pod \"openshift-apiserver-operator-796bbdcf4f-qrb5z\" (UID: \"048e92fd-979c-460a-a018-cabbb1357848\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.117297 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7zt\" (UniqueName: \"kubernetes.io/projected/94edb821-0d5a-478e-9582-0e931d97b222-kube-api-access-kx7zt\") pod \"machine-approver-56656f9798-v4lht\" (UID: \"94edb821-0d5a-478e-9582-0e931d97b222\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.130562 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.131011 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.630991313 +0000 UTC m=+133.826165615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.138613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54l6x\" (UniqueName: \"kubernetes.io/projected/3ca46876-8b39-440e-a82f-b6eb424cca00-kube-api-access-54l6x\") pod \"downloads-7954f5f757-8hbqc\" (UID: \"3ca46876-8b39-440e-a82f-b6eb424cca00\") " pod="openshift-console/downloads-7954f5f757-8hbqc" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.159815 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.162871 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfvzm\" (UniqueName: \"kubernetes.io/projected/e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7-kube-api-access-rfvzm\") pod \"cluster-samples-operator-665b6dd947-zzss2\" (UID: \"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.167351 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.181132 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.204196 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.223383 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.227296 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.232310 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.232984 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-02 10:10:09.732962369 +0000 UTC m=+133.928136681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.240983 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.261758 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.274549 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.275267 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.282595 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.303831 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.322647 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.335012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.335632 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.835616887 +0000 UTC m=+134.030791189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.342727 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.360625 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.361124 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8hbqc" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.380246 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.405750 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.421282 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.436255 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.436426 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:09.936402968 +0000 UTC m=+134.131577270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.436587 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.436962 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-02 10:10:09.936950875 +0000 UTC m=+134.132125177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.444500 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.461552 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.461982 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z"] Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.485335 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.530506 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2"] Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.537325 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.548587 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.548831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.548918 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b72ec18d-92ae-4544-957a-036f8e948b1c-proxy-tls\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.548943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b72ec18d-92ae-4544-957a-036f8e948b1c-images\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.548960 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-key\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.549124 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.549156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-cabundle\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.549181 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc3b41fb-9bd1-4f00-9653-8e73a695de87-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rbpqf\" (UID: \"fc3b41fb-9bd1-4f00-9653-8e73a695de87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.549975 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.550257 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.050231718 +0000 UTC m=+134.245406020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.550427 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.551487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-cabundle\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.552642 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b72ec18d-92ae-4544-957a-036f8e948b1c-images\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.560248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b72ec18d-92ae-4544-957a-036f8e948b1c-proxy-tls\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.561014 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.561412 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.561707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/27186c4d-b911-4ef8-8e86-082ddf35d6b7-signing-key\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.563382 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc3b41fb-9bd1-4f00-9653-8e73a695de87-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rbpqf\" (UID: \"fc3b41fb-9bd1-4f00-9653-8e73a695de87\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.583182 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.601940 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.623359 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.644513 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw"] Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.649715 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.652180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.652748 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.152726351 +0000 UTC m=+134.347900663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.667044 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.684525 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.702999 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.736633 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.747134 4813 request.go:700] Waited for 1.896147321s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.757743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.758224 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.258207596 +0000 UTC m=+134.453381898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.758515 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.767222 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.773221 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8hbqc"] Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.840651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bx9\" (UniqueName: \"kubernetes.io/projected/c5909f8e-1a62-455a-a85a-73d85747e3a7-kube-api-access-x7bx9\") pod \"oauth-openshift-558db77b4-g8r9r\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.851864 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbg7v\" (UniqueName: \"kubernetes.io/projected/e967798d-a0d2-40e4-af66-ba0d04ac8318-kube-api-access-vbg7v\") pod \"console-f9d7485db-8dtjd\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:09 crc kubenswrapper[4813]: W1202 10:10:09.852470 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ca46876_8b39_440e_a82f_b6eb424cca00.slice/crio-c5ffc2f07e7748121f0ab9f6413e7109233d01b672067ac41b5b404c49b68a4f WatchSource:0}: Error finding container c5ffc2f07e7748121f0ab9f6413e7109233d01b672067ac41b5b404c49b68a4f: Status 404 returned error can't find the container with id c5ffc2f07e7748121f0ab9f6413e7109233d01b672067ac41b5b404c49b68a4f Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.870706 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9dbt\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-kube-api-access-g9dbt\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.871295 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.871430 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04f208cb-9296-4a48-8f2c-d5589dad97a1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zr87q\" (UID: \"04f208cb-9296-4a48-8f2c-d5589dad97a1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.871794 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.371778158 +0000 UTC m=+134.566952460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.881819 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" event={"ID":"8ab6ba98-ca13-4026-b2b6-340906a28b6c","Type":"ContainerStarted","Data":"53cdca5c8212040714296e13c65989e44c3db0876bcf55226dd3abec6bf5db24"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.881915 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" event={"ID":"8ab6ba98-ca13-4026-b2b6-340906a28b6c","Type":"ContainerStarted","Data":"ea5deaead9cc8b6258d55af533d12caef769dab5290e27dceca065f461bcb841"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.884782 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" event={"ID":"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7","Type":"ContainerStarted","Data":"96219b22c152ba07e79ec12616f7687b77670c126b9178ca72ec53a5aeb59da9"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.886445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4979739-3dc4-4820-b52d-ad093d216bd7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.891945 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" event={"ID":"d067220e-9800-4c06-b0e2-01d1be8b8986","Type":"ContainerStarted","Data":"c3343a9a94e7521a0f325f43b4b4b9318b1ad0cc62efcda0fe55fc1f371245a3"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.912364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12650400-55e2-4496-a52e-eae7bd0434e9-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.916788 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" event={"ID":"94edb821-0d5a-478e-9582-0e931d97b222","Type":"ContainerStarted","Data":"d20dae947fce779a848e99e86ed22cf9ffbce7e252b812684686ed009840c50e"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.916871 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" event={"ID":"94edb821-0d5a-478e-9582-0e931d97b222","Type":"ContainerStarted","Data":"6a2553f7875d9a0c2ab4c9c5a9e75dcf2f3fbd29b40a064e7c495907b3f843c0"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.930197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" event={"ID":"b285edf6-2c62-4357-9caa-c77feb57ff2d","Type":"ContainerStarted","Data":"1304cdb9cccef3a630d2f10fcfc5c2c903df4b8cee30d838bdd4a1b8589de068"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.934940 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2vnw\" (UniqueName: \"kubernetes.io/projected/12650400-55e2-4496-a52e-eae7bd0434e9-kube-api-access-h2vnw\") pod \"ingress-operator-5b745b69d9-x82gd\" (UID: \"12650400-55e2-4496-a52e-eae7bd0434e9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.938319 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbr72\" (UniqueName: \"kubernetes.io/projected/98c41744-524c-47d4-b78a-71f53480faba-kube-api-access-bbr72\") pod \"etcd-operator-b45778765-7nd9n\" (UID: \"98c41744-524c-47d4-b78a-71f53480faba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.940402 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" event={"ID":"048e92fd-979c-460a-a018-cabbb1357848","Type":"ContainerStarted","Data":"9ab9787cbd885e086e01d621a2ac2853f71d746d19ad4f16d8e59ff7dfdf0674"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.940450 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" event={"ID":"048e92fd-979c-460a-a018-cabbb1357848","Type":"ContainerStarted","Data":"fafd17241e2446dbbfd644fe1aeb21dbf4928bce3ca684af0793a90a5934e371"} Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.948659 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.966965 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrcx\" (UniqueName: \"kubernetes.io/projected/b4979739-3dc4-4820-b52d-ad093d216bd7-kube-api-access-ztrcx\") pod \"cluster-image-registry-operator-dc59b4c8b-wr8n8\" (UID: \"b4979739-3dc4-4820-b52d-ad093d216bd7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.972723 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:09 crc kubenswrapper[4813]: E1202 10:10:09.973563 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.473534218 +0000 UTC m=+134.668708520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.978131 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf2q6\" (UniqueName: \"kubernetes.io/projected/a917dd4e-95f4-4b15-93f3-d7555f527969-kube-api-access-bf2q6\") pod \"console-operator-58897d9998-mbprt\" (UID: \"a917dd4e-95f4-4b15-93f3-d7555f527969\") " pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.978379 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.988480 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" Dec 02 10:10:09 crc kubenswrapper[4813]: I1202 10:10:09.997531 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.009809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-bound-sa-token\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.020159 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.025527 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvtvg\" (UniqueName: \"kubernetes.io/projected/80775159-a100-48e2-a896-ff8c5121cd39-kube-api-access-nvtvg\") pod \"authentication-operator-69f744f599-tlm8g\" (UID: \"80775159-a100-48e2-a896-ff8c5121cd39\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.028675 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.041508 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.042700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2bzl\" (UniqueName: \"kubernetes.io/projected/b72ec18d-92ae-4544-957a-036f8e948b1c-kube-api-access-t2bzl\") pod \"machine-config-operator-74547568cd-vn2ml\" (UID: \"b72ec18d-92ae-4544-957a-036f8e948b1c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.069634 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcls4\" (UniqueName: \"kubernetes.io/projected/ec41424c-e403-485d-aa92-32c0c41e7238-kube-api-access-hcls4\") pod \"collect-profiles-29411160-k99vg\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.073968 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.074479 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.574465324 +0000 UTC m=+134.769639626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.084280 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59p9v\" (UniqueName: \"kubernetes.io/projected/75ac4f31-f970-4342-ac67-8e1354f183e2-kube-api-access-59p9v\") pod \"apiserver-7bbb656c7d-lh74d\" (UID: \"75ac4f31-f970-4342-ac67-8e1354f183e2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.092971 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.108042 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpzxp\" (UniqueName: \"kubernetes.io/projected/bd2a4ac4-4417-42fd-8165-1be41729d64f-kube-api-access-qpzxp\") pod \"openshift-controller-manager-operator-756b6f6bc6-sklx8\" (UID: \"bd2a4ac4-4417-42fd-8165-1be41729d64f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.137206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwhg\" (UniqueName: \"kubernetes.io/projected/fc3b41fb-9bd1-4f00-9653-8e73a695de87-kube-api-access-ggwhg\") pod \"package-server-manager-789f6589d5-rbpqf\" (UID: \"fc3b41fb-9bd1-4f00-9653-8e73a695de87\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.142880 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bqzb\" (UniqueName: \"kubernetes.io/projected/8f750ee8-dda0-4af2-a692-412153a3f80e-kube-api-access-2bqzb\") pod \"multus-admission-controller-857f4d67dd-jxfgc\" (UID: \"8f750ee8-dda0-4af2-a692-412153a3f80e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.159973 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f7qj\" (UniqueName: \"kubernetes.io/projected/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-kube-api-access-2f7qj\") pod \"route-controller-manager-6576b87f9c-scvc2\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.176977 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.178174 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.678146193 +0000 UTC m=+134.873320495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.191347 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.207387 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.211830 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjrs\" (UniqueName: \"kubernetes.io/projected/581444db-9870-4a61-a384-c3a96bff71de-kube-api-access-dtjrs\") pod \"migrator-59844c95c7-7cqw2\" (UID: \"581444db-9870-4a61-a384-c3a96bff71de\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.215558 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmplb\" (UniqueName: \"kubernetes.io/projected/03ddc93f-c104-482e-a615-1f6ce52c62b8-kube-api-access-fmplb\") pod \"marketplace-operator-79b997595-tt449\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.230445 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.241165 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s6jg\" (UniqueName: \"kubernetes.io/projected/f3d932b1-7a66-4020-b200-fb2ae977f7bf-kube-api-access-2s6jg\") pod \"router-default-5444994796-2bgmp\" (UID: \"f3d932b1-7a66-4020-b200-fb2ae977f7bf\") " pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.250841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tkjm\" (UniqueName: \"kubernetes.io/projected/27186c4d-b911-4ef8-8e86-082ddf35d6b7-kube-api-access-9tkjm\") pod \"service-ca-9c57cc56f-22vj7\" (UID: \"27186c4d-b911-4ef8-8e86-082ddf35d6b7\") " pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.266524 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.273408 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.281037 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.281427 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.78141267 +0000 UTC m=+134.976586972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.281809 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.301259 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6478\" (UniqueName: \"kubernetes.io/projected/3b4f0840-cc07-4b0a-a96b-5534312b0553-kube-api-access-n6478\") pod \"machine-config-controller-84d6567774-j2p9n\" (UID: \"3b4f0840-cc07-4b0a-a96b-5534312b0553\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.326872 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.359637 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.366362 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.376441 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.387356 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.388008 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.388294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.388736 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.88871585 +0000 UTC m=+135.083890152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.395054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhx4\" (UniqueName: \"kubernetes.io/projected/8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a-kube-api-access-wqhx4\") pod \"kube-storage-version-migrator-operator-b67b599dd-rvm28\" (UID: \"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.401814 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdzbg\" (UniqueName: \"kubernetes.io/projected/ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b-kube-api-access-hdzbg\") pod \"packageserver-d55dfcdfc-p8fvw\" (UID: \"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.403222 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.407893 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b1426bd-91d8-43d6-8d72-5316200e13c7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-h2nx2\" (UID: \"2b1426bd-91d8-43d6-8d72-5316200e13c7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.408105 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf7lb\" (UniqueName: \"kubernetes.io/projected/60b9ebac-2fc0-4238-92bb-6d3c25e0c492-kube-api-access-lf7lb\") pod \"catalog-operator-68c6474976-b4frq\" (UID: \"60b9ebac-2fc0-4238-92bb-6d3c25e0c492\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.410693 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2ht\" (UniqueName: \"kubernetes.io/projected/01bcfafe-52bc-44c0-813a-fede5ecfdc41-kube-api-access-mh2ht\") pod \"service-ca-operator-777779d784-lq6rn\" (UID: \"01bcfafe-52bc-44c0-813a-fede5ecfdc41\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.489525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhlhd\" (UniqueName: \"kubernetes.io/projected/58d04032-dd97-469b-a9e7-de98cd15f688-kube-api-access-rhlhd\") pod \"dns-operator-744455d44c-f2zd8\" (UID: \"58d04032-dd97-469b-a9e7-de98cd15f688\") " pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.489569 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8fh\" (UniqueName: 
\"kubernetes.io/projected/b36e8c79-b5eb-44db-9193-39af9560315e-kube-api-access-jk8fh\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.489594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.489615 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b36e8c79-b5eb-44db-9193-39af9560315e-srv-cert\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.489650 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5967be2f-1cd7-4fbf-9482-92dd8689abdf-config\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.489686 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtws\" (UniqueName: \"kubernetes.io/projected/83fd9a6f-d3fc-4e3e-8924-d363bab949eb-kube-api-access-7mtws\") pod \"control-plane-machine-set-operator-78cbb6b69f-5jfdj\" (UID: \"83fd9a6f-d3fc-4e3e-8924-d363bab949eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.489741 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58d04032-dd97-469b-a9e7-de98cd15f688-metrics-tls\") pod \"dns-operator-744455d44c-f2zd8\" (UID: \"58d04032-dd97-469b-a9e7-de98cd15f688\") " pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.490590 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:10.990573833 +0000 UTC m=+135.185748135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.491003 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/83fd9a6f-d3fc-4e3e-8924-d363bab949eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5jfdj\" (UID: \"83fd9a6f-d3fc-4e3e-8924-d363bab949eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.491024 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5967be2f-1cd7-4fbf-9482-92dd8689abdf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.491060 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b36e8c79-b5eb-44db-9193-39af9560315e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.491100 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5967be2f-1cd7-4fbf-9482-92dd8689abdf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.502146 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.513753 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.522451 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.589647 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.591514 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.591867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-registration-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-mountpoint-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nzl\" (UniqueName: \"kubernetes.io/projected/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-kube-api-access-v6nzl\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhlhd\" (UniqueName: \"kubernetes.io/projected/58d04032-dd97-469b-a9e7-de98cd15f688-kube-api-access-rhlhd\") pod \"dns-operator-744455d44c-f2zd8\" (UID: \"58d04032-dd97-469b-a9e7-de98cd15f688\") " pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592300 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk8fh\" (UniqueName: \"kubernetes.io/projected/b36e8c79-b5eb-44db-9193-39af9560315e-kube-api-access-jk8fh\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592384 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a39c3d8-3bac-4d27-92f8-7133870d369a-certs\") pod \"machine-config-server-q4k5s\" (UID: \"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b36e8c79-b5eb-44db-9193-39af9560315e-srv-cert\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592442 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-metrics-tls\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.592696 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.092675044 +0000 UTC m=+135.287849346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419-cert\") pod \"ingress-canary-kmcbt\" (UID: \"4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419\") " pod="openshift-ingress-canary/ingress-canary-kmcbt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592792 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5967be2f-1cd7-4fbf-9482-92dd8689abdf-config\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592842 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-config-volume\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592891 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mtws\" (UniqueName: \"kubernetes.io/projected/83fd9a6f-d3fc-4e3e-8924-d363bab949eb-kube-api-access-7mtws\") pod \"control-plane-machine-set-operator-78cbb6b69f-5jfdj\" (UID: \"83fd9a6f-d3fc-4e3e-8924-d363bab949eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a39c3d8-3bac-4d27-92f8-7133870d369a-node-bootstrap-token\") pod \"machine-config-server-q4k5s\" (UID: \"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.592991 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-socket-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " 
pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593016 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58d04032-dd97-469b-a9e7-de98cd15f688-metrics-tls\") pod \"dns-operator-744455d44c-f2zd8\" (UID: \"58d04032-dd97-469b-a9e7-de98cd15f688\") " pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-csi-data-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593059 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97bv7\" (UniqueName: \"kubernetes.io/projected/4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419-kube-api-access-97bv7\") pod \"ingress-canary-kmcbt\" (UID: \"4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419\") " pod="openshift-ingress-canary/ingress-canary-kmcbt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/83fd9a6f-d3fc-4e3e-8924-d363bab949eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5jfdj\" (UID: \"83fd9a6f-d3fc-4e3e-8924-d363bab949eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593372 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5967be2f-1cd7-4fbf-9482-92dd8689abdf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593453 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b36e8c79-b5eb-44db-9193-39af9560315e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593548 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5967be2f-1cd7-4fbf-9482-92dd8689abdf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593573 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-plugins-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc 
kubenswrapper[4813]: I1202 10:10:10.593596 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n88dx\" (UniqueName: \"kubernetes.io/projected/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-kube-api-access-n88dx\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.593705 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvm8\" (UniqueName: \"kubernetes.io/projected/3a39c3d8-3bac-4d27-92f8-7133870d369a-kube-api-access-ndvm8\") pod \"machine-config-server-q4k5s\" (UID: \"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.596875 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5967be2f-1cd7-4fbf-9482-92dd8689abdf-config\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.602712 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b36e8c79-b5eb-44db-9193-39af9560315e-srv-cert\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.608598 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.630198 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58d04032-dd97-469b-a9e7-de98cd15f688-metrics-tls\") pod \"dns-operator-744455d44c-f2zd8\" (UID: \"58d04032-dd97-469b-a9e7-de98cd15f688\") " pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.630253 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/83fd9a6f-d3fc-4e3e-8924-d363bab949eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5jfdj\" (UID: \"83fd9a6f-d3fc-4e3e-8924-d363bab949eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.630453 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5967be2f-1cd7-4fbf-9482-92dd8689abdf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.630836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b36e8c79-b5eb-44db-9193-39af9560315e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.639500 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.640893 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.675404 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk8fh\" (UniqueName: \"kubernetes.io/projected/b36e8c79-b5eb-44db-9193-39af9560315e-kube-api-access-jk8fh\") pod \"olm-operator-6b444d44fb-lxq5m\" (UID: \"b36e8c79-b5eb-44db-9193-39af9560315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-mountpoint-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6nzl\" (UniqueName: \"kubernetes.io/projected/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-kube-api-access-v6nzl\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696283 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a39c3d8-3bac-4d27-92f8-7133870d369a-certs\") pod \"machine-config-server-q4k5s\" (UID: \"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-metrics-tls\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419-cert\") pod \"ingress-canary-kmcbt\" (UID: \"4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419\") " pod="openshift-ingress-canary/ingress-canary-kmcbt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-config-volume\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696386 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a39c3d8-3bac-4d27-92f8-7133870d369a-node-bootstrap-token\") pod \"machine-config-server-q4k5s\" (UID: 
\"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696409 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-socket-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696393 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-mountpoint-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-csi-data-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696520 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-csi-data-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696539 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97bv7\" (UniqueName: \"kubernetes.io/projected/4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419-kube-api-access-97bv7\") pod \"ingress-canary-kmcbt\" (UID: \"4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419\") " pod="openshift-ingress-canary/ingress-canary-kmcbt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-plugins-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n88dx\" (UniqueName: \"kubernetes.io/projected/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-kube-api-access-n88dx\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696728 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndvm8\" (UniqueName: \"kubernetes.io/projected/3a39c3d8-3bac-4d27-92f8-7133870d369a-kube-api-access-ndvm8\") pod \"machine-config-server-q4k5s\" (UID: \"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.696814 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-registration-dir\") pod 
\"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.696948 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.196935731 +0000 UTC m=+135.392110033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.697276 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-registration-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.699802 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-plugins-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.700286 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-config-volume\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.702917 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-socket-dir\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.715094 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.722769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mtws\" (UniqueName: \"kubernetes.io/projected/83fd9a6f-d3fc-4e3e-8924-d363bab949eb-kube-api-access-7mtws\") pod \"control-plane-machine-set-operator-78cbb6b69f-5jfdj\" (UID: \"83fd9a6f-d3fc-4e3e-8924-d363bab949eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.722783 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-metrics-tls\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.722868 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rhlhd\" (UniqueName: \"kubernetes.io/projected/58d04032-dd97-469b-a9e7-de98cd15f688-kube-api-access-rhlhd\") pod \"dns-operator-744455d44c-f2zd8\" (UID: \"58d04032-dd97-469b-a9e7-de98cd15f688\") " pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.723314 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3a39c3d8-3bac-4d27-92f8-7133870d369a-certs\") pod \"machine-config-server-q4k5s\" (UID: \"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.723570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419-cert\") pod \"ingress-canary-kmcbt\" (UID: \"4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419\") " pod="openshift-ingress-canary/ingress-canary-kmcbt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.738320 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3a39c3d8-3bac-4d27-92f8-7133870d369a-node-bootstrap-token\") pod \"machine-config-server-q4k5s\" (UID: \"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.742567 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5967be2f-1cd7-4fbf-9482-92dd8689abdf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5wvlc\" (UID: \"5967be2f-1cd7-4fbf-9482-92dd8689abdf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.785513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97bv7\" (UniqueName: \"kubernetes.io/projected/4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419-kube-api-access-97bv7\") pod \"ingress-canary-kmcbt\" (UID: \"4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419\") " pod="openshift-ingress-canary/ingress-canary-kmcbt" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.788495 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6nzl\" (UniqueName: \"kubernetes.io/projected/84cafe57-4b2e-4a86-bb75-758c6cf1e0f6-kube-api-access-v6nzl\") pod \"dns-default-9ztk4\" (UID: \"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6\") " pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.796360 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n88dx\" (UniqueName: \"kubernetes.io/projected/0dd9da09-12ea-444d-8f15-4e1acbffb8d6-kube-api-access-n88dx\") pod \"csi-hostpathplugin-lzjn8\" (UID: \"0dd9da09-12ea-444d-8f15-4e1acbffb8d6\") " pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.799428 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.799533 4813 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.299511737 +0000 UTC m=+135.494686049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.799936 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.801195 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.301184478 +0000 UTC m=+135.496358770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.807939 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndvm8\" (UniqueName: \"kubernetes.io/projected/3a39c3d8-3bac-4d27-92f8-7133870d369a-kube-api-access-ndvm8\") pod \"machine-config-server-q4k5s\" (UID: \"3a39c3d8-3bac-4d27-92f8-7133870d369a\") " pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.839496 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.859386 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.859417 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.902733 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:10 crc kubenswrapper[4813]: E1202 10:10:10.903230 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.403210487 +0000 UTC m=+135.598384789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:10 crc kubenswrapper[4813]: I1202 10:10:10.968816 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.013722 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.018395 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" event={"ID":"94edb821-0d5a-478e-9582-0e931d97b222","Type":"ContainerStarted","Data":"761aff6ce07609a374f2b98eb88f2ff61af800fd5b0570410f62840932c22806"} Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.018722 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.019049 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.519034579 +0000 UTC m=+135.714208881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.024304 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-q4k5s" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.026757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2bgmp" event={"ID":"f3d932b1-7a66-4020-b200-fb2ae977f7bf","Type":"ContainerStarted","Data":"67be3d55d84d8fb0bce6cf09329e8f68678d31784e476dc2a81eb80fb48072a4"} Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.038956 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8hbqc" event={"ID":"3ca46876-8b39-440e-a82f-b6eb424cca00","Type":"ContainerStarted","Data":"dd13a13c5997873116d0296bcd15377ae3cd30a997552404e0dbc458dfd10cf2"} Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.039103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8hbqc" event={"ID":"3ca46876-8b39-440e-a82f-b6eb424cca00","Type":"ContainerStarted","Data":"c5ffc2f07e7748121f0ab9f6413e7109233d01b672067ac41b5b404c49b68a4f"} Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.040063 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8hbqc" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.042436 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.052254 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hbqc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.052326 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8hbqc" podUID="3ca46876-8b39-440e-a82f-b6eb424cca00" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.052830 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kmcbt" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.070821 4813 generic.go:334] "Generic (PLEG): container finished" podID="b285edf6-2c62-4357-9caa-c77feb57ff2d" containerID="930dd831a9135996830462843287efdfbd9f17cb43f4d38c5fd3d2565e06c36d" exitCode=0 Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.070913 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" event={"ID":"b285edf6-2c62-4357-9caa-c77feb57ff2d","Type":"ContainerDied","Data":"930dd831a9135996830462843287efdfbd9f17cb43f4d38c5fd3d2565e06c36d"} Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.097342 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" event={"ID":"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7","Type":"ContainerStarted","Data":"f29bd5dd367eb1c1f4d4c43ef45056111b0fbbfa54c74f69c46a33d7db9f87d9"} Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.097400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" event={"ID":"e8b3014b-4c47-4c15-b90d-0c2aafcbe0c7","Type":"ContainerStarted","Data":"625bbcc9d199d20e69c787104bda7b9412a79ced413b51a471218d739e386734"} Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.127592 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.130237 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.630211308 +0000 UTC m=+135.825385700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.229020 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.231065 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.73104548 +0000 UTC m=+135.926219792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.266601 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4wtmn" podStartSLOduration=116.26658278 podStartE2EDuration="1m56.26658278s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:11.26464696 +0000 UTC m=+135.459821262" watchObservedRunningTime="2025-12-02 10:10:11.26658278 +0000 UTC m=+135.461757082" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.341383 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.341986 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.841946741 +0000 UTC m=+136.037121033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.394520 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qrb5z" podStartSLOduration=117.394497932 podStartE2EDuration="1m57.394497932s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:11.369278239 +0000 UTC m=+135.564452541" watchObservedRunningTime="2025-12-02 10:10:11.394497932 +0000 UTC m=+135.589672234" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.458872 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.470842 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:11.959377032 +0000 UTC m=+136.154551334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.560241 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.560778 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.06075833 +0000 UTC m=+136.255932632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.583663 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" podStartSLOduration=117.583634402 podStartE2EDuration="1m57.583634402s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:11.575494942 +0000 UTC m=+135.770669244" watchObservedRunningTime="2025-12-02 10:10:11.583634402 +0000 UTC m=+135.778808704" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.663659 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.664358 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.164345357 +0000 UTC m=+136.359519659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.704531 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" podStartSLOduration=117.704509259 podStartE2EDuration="1m57.704509259s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:11.703362793 +0000 UTC m=+135.898537105" watchObservedRunningTime="2025-12-02 10:10:11.704509259 +0000 UTC m=+135.899683551" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.765788 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.766165 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.266149409 +0000 UTC m=+136.461323711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.867954 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.868388 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.368374314 +0000 UTC m=+136.563548606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.892165 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lht" podStartSLOduration=118.892142142 podStartE2EDuration="1m58.892142142s" podCreationTimestamp="2025-12-02 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:11.89140676 +0000 UTC m=+136.086581082" watchObservedRunningTime="2025-12-02 10:10:11.892142142 +0000 UTC m=+136.087316444" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.938710 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8hbqc" podStartSLOduration=117.93868752 podStartE2EDuration="1m57.93868752s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:11.933566173 +0000 UTC m=+136.128740475" watchObservedRunningTime="2025-12-02 10:10:11.93868752 +0000 UTC m=+136.133861822" Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.968772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:11 crc kubenswrapper[4813]: E1202 10:10:11.969218 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.469198085 +0000 UTC m=+136.664372397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:11 crc kubenswrapper[4813]: I1202 10:10:11.973542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.014391 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zzss2" podStartSLOduration=119.014369801 podStartE2EDuration="1m59.014369801s" podCreationTimestamp="2025-12-02 10:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:12.012700689 +0000 UTC m=+136.207875011" watchObservedRunningTime="2025-12-02 10:10:12.014369801 +0000 UTC m=+136.209544093" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.021387 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.033296 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7nd9n"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.043152 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.069781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.070140 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.570128 +0000 UTC m=+136.765302302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: W1202 10:10:12.070305 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98c41744_524c_47d4_b78a_71f53480faba.slice/crio-52de29b6ca06f3aa4ecb0c7b9fba2a52c692d8c9975d4d1b6d86cc1c32e939b3 WatchSource:0}: Error finding container 52de29b6ca06f3aa4ecb0c7b9fba2a52c692d8c9975d4d1b6d86cc1c32e939b3: Status 404 returned error can't find the container with id 52de29b6ca06f3aa4ecb0c7b9fba2a52c692d8c9975d4d1b6d86cc1c32e939b3 Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.116123 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" event={"ID":"04f208cb-9296-4a48-8f2c-d5589dad97a1","Type":"ContainerStarted","Data":"dfa46899d92072493403c1047def4933c4fb6c48b672cbfb3f522cbf56c468e9"} Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.131666 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2bgmp" event={"ID":"f3d932b1-7a66-4020-b200-fb2ae977f7bf","Type":"ContainerStarted","Data":"4d1af9764476171a31ab66c8a52f3ad46acd9daeb84865b0eb2d86fb78e5aac1"} Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.133702 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" event={"ID":"98c41744-524c-47d4-b78a-71f53480faba","Type":"ContainerStarted","Data":"52de29b6ca06f3aa4ecb0c7b9fba2a52c692d8c9975d4d1b6d86cc1c32e939b3"} Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.147405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" event={"ID":"b4979739-3dc4-4820-b52d-ad093d216bd7","Type":"ContainerStarted","Data":"0085927ae9a27883ee36162e6dbb97e3f3274557ef855510de78037cc7ec3f81"} Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.162801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-q4k5s" event={"ID":"3a39c3d8-3bac-4d27-92f8-7133870d369a","Type":"ContainerStarted","Data":"3cb4a47034985c1876621cff446158e98b359469108f4ae85aea07c02c9d8113"} Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.163227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-q4k5s" event={"ID":"3a39c3d8-3bac-4d27-92f8-7133870d369a","Type":"ContainerStarted","Data":"2720ec0b2a0922c9372480a6b145ad5ee06fc966b8dea86e25ec0dc264be20a4"} Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.175454 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.175643 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" event={"ID":"b285edf6-2c62-4357-9caa-c77feb57ff2d","Type":"ContainerStarted","Data":"9b2254cd131fccacf6cc46424420611847aa91603ac868d299ddb34757a8c288"} Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.176473 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.176621 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.676601115 +0000 UTC m=+136.871775417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.180856 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hbqc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.180903 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8hbqc" podUID="3ca46876-8b39-440e-a82f-b6eb424cca00" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.250970 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8dtjd"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.251035 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mbprt"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.273520 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.279252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.280257 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.780234523 +0000 UTC m=+136.975408815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.297989 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g8r9r"] Dec 02 10:10:12 crc kubenswrapper[4813]: W1202 10:10:12.301161 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12650400_55e2_4496_a52e_eae7bd0434e9.slice/crio-d95f0895ba8afc0bd838bd202fe5eab2eb75a5ad3dd9a0d6657d46bef5ca331b WatchSource:0}: Error finding container d95f0895ba8afc0bd838bd202fe5eab2eb75a5ad3dd9a0d6657d46bef5ca331b: Status 404 returned error can't find the container with id d95f0895ba8afc0bd838bd202fe5eab2eb75a5ad3dd9a0d6657d46bef5ca331b Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.305707 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.381265 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.381722 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.881700985 +0000 UTC m=+137.076875287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.489817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.490445 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:12.990433119 +0000 UTC m=+137.185607421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.503427 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.505050 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2bgmp" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.508698 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tlm8g"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.516397 4813 patch_prober.go:28] interesting pod/router-default-5444994796-2bgmp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:10:12 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Dec 02 10:10:12 crc kubenswrapper[4813]: [+]process-running ok Dec 02 10:10:12 crc kubenswrapper[4813]: healthz check failed Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.516466 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2bgmp" podUID="f3d932b1-7a66-4020-b200-fb2ae977f7bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.516660 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.534960 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jxfgc"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.545133 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.557637 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tt449"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.571302 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-q4k5s" podStartSLOduration=5.571278478 podStartE2EDuration="5.571278478s" podCreationTimestamp="2025-12-02 10:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:12.570958078 +0000 UTC m=+136.766132380" watchObservedRunningTime="2025-12-02 10:10:12.571278478 +0000 UTC m=+136.766452780" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.591918 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.592330 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.092314013 +0000 UTC m=+137.287488305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.614837 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2bgmp" podStartSLOduration=118.614816033 podStartE2EDuration="1m58.614816033s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:12.613198934 +0000 UTC m=+136.808373236" watchObservedRunningTime="2025-12-02 10:10:12.614816033 +0000 UTC m=+136.809990335" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.653509 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" podStartSLOduration=118.653484949 podStartE2EDuration="1m58.653484949s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:12.652914811 +0000 UTC m=+136.848089113" watchObservedRunningTime="2025-12-02 10:10:12.653484949 +0000 UTC m=+136.848659251" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.685915 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.690119 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.695169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.695496 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.195480507 +0000 UTC m=+137.390654809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.717642 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9ztk4"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.720331 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f2zd8"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.721283 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.734142 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.735431 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.787297 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-22vj7"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.790463 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.794315 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.796109 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lzjn8"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.796642 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.797014 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.29699462 +0000 UTC m=+137.492168922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.820589 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.852655 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.852726 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.870540 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.874109 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.903566 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.903987 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kmcbt"] Dec 02 10:10:12 crc kubenswrapper[4813]: E1202 10:10:12.905366 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.405342322 +0000 UTC m=+137.600516624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.911010 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n"] Dec 02 10:10:12 crc kubenswrapper[4813]: I1202 10:10:12.917828 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2"] Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.005667 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.006223 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.506196995 +0000 UTC m=+137.701371297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.006296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.006835 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.506812134 +0000 UTC m=+137.701986436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.106958 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.107158 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.607133019 +0000 UTC m=+137.802307331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.107357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.107841 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.60782352 +0000 UTC m=+137.802997862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.181777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" event={"ID":"27186c4d-b911-4ef8-8e86-082ddf35d6b7","Type":"ContainerStarted","Data":"70499a90be836003b91f3017c66e1ace88d1030356b400a65f572bdafeb2142d"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.183536 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" event={"ID":"e30a4ae1-71f0-4065-8e7a-e75e2588aeac","Type":"ContainerStarted","Data":"de4e9b54dc7292cc7d17ef05e00f3a8f4edb3dfd8dfa75563fce19d43f6099ab"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.184711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" event={"ID":"bd2a4ac4-4417-42fd-8165-1be41729d64f","Type":"ContainerStarted","Data":"1a89b9c7affba81a8f9b8d18d2e474914c8e27ad8e8f8f31c22da555d1bababd"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.185751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" event={"ID":"5967be2f-1cd7-4fbf-9482-92dd8689abdf","Type":"ContainerStarted","Data":"929f36fe8fc7f30dbfaafd53c1bd3efbc6a0ddba18a8a39f85e89ffbc4bc5646"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.186784 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" event={"ID":"03ddc93f-c104-482e-a615-1f6ce52c62b8","Type":"ContainerStarted","Data":"bc30edea639de0ee7f13337036ce5aa547c309cfb539845231d981526fd47c2f"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.187887 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mbprt" event={"ID":"a917dd4e-95f4-4b15-93f3-d7555f527969","Type":"ContainerStarted","Data":"704198ce557f97b0bb839f13942594a930f83158b20f2452360e102a540a00d7"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.189430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" event={"ID":"b72ec18d-92ae-4544-957a-036f8e948b1c","Type":"ContainerStarted","Data":"69f8e52f5ca83e85999b63ec053f1772fa419128e470933a13ff772e7537f606"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.189458 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" event={"ID":"b72ec18d-92ae-4544-957a-036f8e948b1c","Type":"ContainerStarted","Data":"a950c0eafa8ff5134db39ecfcc15a3ca8c68c21c553e95e9c31599c6cd76f4be"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.190520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9ztk4" event={"ID":"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6","Type":"ContainerStarted","Data":"fff4a305e3c45262bc905986f4d628a5e519b19ef28d41510cb7d2d0b4d928a4"} 
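The long run of records above shows a boot-time ordering race rather than a persistent fault: the kubelet repeatedly fails UnmountVolume.TearDown and MountVolume.MountDevice for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 because the CSI driver kubevirt.io.hostpath-provisioner is not yet in the kubelet's list of registered drivers, while the pod that provides that driver (hostpath-provisioner/csi-hostpathplugin-lzjn8) is itself still being sandboxed and started in the same SyncLoop. Each failure is requeued by nestedpendingoperations.go with the durationBeforeRetry of 500ms visible in the "No retries permitted until" timestamps, so the same error repeats roughly twice a second until registration completes, at which point the image-registry-697d97f7c8-zkbcp mount would be expected to succeed. The sketch below is a minimal, hypothetical Go model of that lookup-and-retry pattern, assuming a simple registry map guarded by a mutex; it is not the kubelet's actual implementation (the real logic lives in the kubelet's CSI volume plugin and operation executor), and all names, timings, and the fixed backoff are illustrative stand-ins taken from what the log shows.

```go
// Toy model of the failure/retry pattern in the log above. Hypothetical code:
// driverRegistry, register, and lookup are illustrative names, not kubelet APIs.
package main

import (
	"fmt"
	"sync"
	"time"
)

// driverRegistry stands in for the kubelet's set of registered CSI drivers,
// which a driver pod (here, csi-hostpathplugin) populates after it starts and
// registers over the kubelet's plugin-registration socket.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]bool
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = true
}

func (r *driverRegistry) lookup(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if !r.drivers[name] {
		// Mirrors the failure mode in the log records above.
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]bool{}}

	// Simulate the driver pod coming up slightly later, as the SyncLoop UPDATE
	// for csi-hostpathplugin-lzjn8 around 10:10:12 suggests. 1200ms is an
	// arbitrary illustrative delay.
	go func() {
		time.Sleep(1200 * time.Millisecond)
		reg.register("kubevirt.io.hostpath-provisioner")
	}()

	// Retry with the fixed 500ms durationBeforeRetry seen in the
	// nestedpendingoperations.go records: each failure schedules the next
	// attempt half a second later.
	const backoff = 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := reg.lookup("kubevirt.io.hostpath-provisioner"); err == nil {
			fmt.Printf("attempt %d: driver registered, MountDevice can proceed\n", attempt)
			return
		} else {
			fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, backoff)
		}
		time.Sleep(backoff)
	}
}
```

Run as written, the sketch prints two or three lookup failures and then succeeds, which is the shape this log should take once csi-hostpathplugin-lzjn8 finishes starting and registers the driver.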
Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.191635 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" event={"ID":"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b","Type":"ContainerStarted","Data":"7f775ba75490a3ec7ae73c2c96fb5027d6d82f6d2994bacb173c15b768e228b7"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.192592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" event={"ID":"75ac4f31-f970-4342-ac67-8e1354f183e2","Type":"ContainerStarted","Data":"2e0f3cef8b1b9636955b8d30a8b2361047da2bbd763ada241c242189637ce827"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.193751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8dtjd" event={"ID":"e967798d-a0d2-40e4-af66-ba0d04ac8318","Type":"ContainerStarted","Data":"4fcef7e8789191e7a118d58407efae3fd16c7caafe706a872f38e4d2c6fd1ad1"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.194811 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" event={"ID":"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a","Type":"ContainerStarted","Data":"5f9516d3f41c6f786c6daac517161c58c4b3127f8b062a66203adb0cabd60fe0"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.196085 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" event={"ID":"58d04032-dd97-469b-a9e7-de98cd15f688","Type":"ContainerStarted","Data":"3fdbf4f86e950a8728341e711262c1249c09edbf18e6ed7fbb35f0c43c947a0b"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.197288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" event={"ID":"60b9ebac-2fc0-4238-92bb-6d3c25e0c492","Type":"ContainerStarted","Data":"989e762152f382a895b63b2f11973a765d030585ec17e42c6191b918c0840f9e"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.198335 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" event={"ID":"b36e8c79-b5eb-44db-9193-39af9560315e","Type":"ContainerStarted","Data":"b71bd68531c3c0db2bb6b37404ec1c944692556fabc2397e15629a1ce5949884"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.200266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" event={"ID":"2b1426bd-91d8-43d6-8d72-5316200e13c7","Type":"ContainerStarted","Data":"5287a52a329478812ffd637acc04462db72438d31b92a81e08cf61d055e0e782"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.202361 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" event={"ID":"ec41424c-e403-485d-aa92-32c0c41e7238","Type":"ContainerStarted","Data":"f36f789b28ee8d1cef8f31393d3976c346e40195dcaa15b6b3db1981d2f262cc"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.203583 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" event={"ID":"581444db-9870-4a61-a384-c3a96bff71de","Type":"ContainerStarted","Data":"09b087ba0fd346199ab4f7a3809992e28e56aff0c2527d5e18d0bbdf2bfd99c2"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.204797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" event={"ID":"80775159-a100-48e2-a896-ff8c5121cd39","Type":"ContainerStarted","Data":"e8fa3ecfe8dd2fb084d7b6091f62461f19f3e394b9739c4d93a58a48c7ffe776"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.206610 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" event={"ID":"fc3b41fb-9bd1-4f00-9653-8e73a695de87","Type":"ContainerStarted","Data":"99d19e224749878abbc1b313a1e36c55db21e454d353b6bb08076f772cb448d2"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.207976 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.208172 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.708138736 +0000 UTC m=+137.903313088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.208219 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.208904 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.708828828 +0000 UTC m=+137.904003160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.209263 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" event={"ID":"12650400-55e2-4496-a52e-eae7bd0434e9","Type":"ContainerStarted","Data":"d6ce9c9f706ccb71ec8f7d56c0777f6e676230291ef5d0b27405ca9b2f26d638"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.209323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" event={"ID":"12650400-55e2-4496-a52e-eae7bd0434e9","Type":"ContainerStarted","Data":"d95f0895ba8afc0bd838bd202fe5eab2eb75a5ad3dd9a0d6657d46bef5ca331b"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.210442 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kmcbt" event={"ID":"4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419","Type":"ContainerStarted","Data":"6ec5ce655aa53ceca471df82ffee46225cdab58e8463a0ebe0398ec3cd82422f"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.211558 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" event={"ID":"3b4f0840-cc07-4b0a-a96b-5534312b0553","Type":"ContainerStarted","Data":"a9b0273e7dfe00d5c0662cac29fe66508a453c2eb2f5595f8ac1dfa45f0b28d8"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.212758 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" event={"ID":"c5909f8e-1a62-455a-a85a-73d85747e3a7","Type":"ContainerStarted","Data":"982ad2e381092bee947bb39b7ed0b3919dba3e048546234a5ef60a70bec08f20"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.214285 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" event={"ID":"8f750ee8-dda0-4af2-a692-412153a3f80e","Type":"ContainerStarted","Data":"25646bbe81a06dc7b0486908fa8b072d6d5f2af09153fd523577b6f856d9aa74"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.215437 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" event={"ID":"83fd9a6f-d3fc-4e3e-8924-d363bab949eb","Type":"ContainerStarted","Data":"c28a11349da8ef051384b85d40edba47525e8692ce47542ecdab77c038dd938b"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.216540 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" event={"ID":"0dd9da09-12ea-444d-8f15-4e1acbffb8d6","Type":"ContainerStarted","Data":"b0041d344c8f2462f067a55ec76a5c3c9182d1e06df4ce88e263560703373338"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.218769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" event={"ID":"b4979739-3dc4-4820-b52d-ad093d216bd7","Type":"ContainerStarted","Data":"83f8dd0ac0ea7fc921b40e1c7c7826d7e4a18b70655a573990cc1e4d34c73e91"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 
10:10:13.219981 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" event={"ID":"01bcfafe-52bc-44c0-813a-fede5ecfdc41","Type":"ContainerStarted","Data":"1cbfdb5a91fea84cb086e68f7edc7b6b87a0da099834ccf25abdb116a03f2027"} Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.221280 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hbqc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.221443 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8hbqc" podUID="3ca46876-8b39-440e-a82f-b6eb424cca00" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.229283 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-knbz8" Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.236845 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wr8n8" podStartSLOduration=119.236821406 podStartE2EDuration="1m59.236821406s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:13.235285109 +0000 UTC m=+137.430459451" watchObservedRunningTime="2025-12-02 10:10:13.236821406 +0000 UTC m=+137.431995708" Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.311654 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.311977 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.81196047 +0000 UTC m=+138.007134772 (durationBeforeRetry 500ms). 
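[annotation] The Readiness probe failures here ("connect: connection refused") are the kubelet's prober dialing a container IP before the server inside has started listening. In spirit an HTTP probe is just a bounded GET where a dial error or a status outside 200-399 counts as failure; a toy version of that check (URL copied from the downloads pod entry above, timeout value assumed):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe mimics the shape of an HTTP readiness probe: a GET against
    // the pod IP and port, where a dial error (e.g. "connection refused"
    // while the server is still starting) or a non-2xx/3xx status counts
    // as a failure.
    func probe(url string) error {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://10.217.0.8:8080/"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }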
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.413580 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.414560 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:13.914532826 +0000 UTC m=+138.109707318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.509761 4813 patch_prober.go:28] interesting pod/router-default-5444994796-2bgmp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:10:13 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Dec 02 10:10:13 crc kubenswrapper[4813]: [+]process-running ok Dec 02 10:10:13 crc kubenswrapper[4813]: healthz check failed Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.510044 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2bgmp" podUID="f3d932b1-7a66-4020-b200-fb2ae977f7bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.514466 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.514732 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.014717548 +0000 UTC m=+138.209891850 (durationBeforeRetry 500ms). 
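[annotation] The router's startup probe output above is an aggregated healthz body: one [+]/[-] line per named sub-check, and any failing check turns the endpoint into an HTTP 500, which is what the prober then reports as "HTTP probe failed with statuscode: 500". A self-contained sketch of that shape (sub-check names and the pass/fail pattern copied from the log; the port and the check logic are invented for illustration):

    package main

    import (
        "fmt"
        "net/http"
    )

    type check struct {
        name string
        run  func() error
    }

    func main() {
        // Names mirror the router log; during startup the first two
        // checks fail, which makes the whole endpoint return 500.
        checks := []check{
            {"backend-http", func() error { return fmt.Errorf("not ready") }},
            {"has-synced", func() error { return fmt.Errorf("not ready") }},
            {"process-running", func() error { return nil }},
        }
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.run(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError) // -> "statuscode: 500" in the probe log
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        })
        http.ListenAndServe(":1936", nil) // port is an assumption
    }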
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.616859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.617284 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.117258102 +0000 UTC m=+138.312432404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.717722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.717917 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.217886678 +0000 UTC m=+138.413060990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.718194 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.718501 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.218489016 +0000 UTC m=+138.413663318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.818876 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.819094 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.31905565 +0000 UTC m=+138.514229952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.819253 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.819676 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.319662149 +0000 UTC m=+138.514836451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.923256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.923452 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.423431001 +0000 UTC m=+138.618605303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:13 crc kubenswrapper[4813]: I1202 10:10:13.923844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:13 crc kubenswrapper[4813]: E1202 10:10:13.924285 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.424275077 +0000 UTC m=+138.619449379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.024580 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.025004 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.524987005 +0000 UTC m=+138.720161307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.125824 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.127577 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.627563081 +0000 UTC m=+138.822737383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.238255 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.238949 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.738925415 +0000 UTC m=+138.934099717 (durationBeforeRetry 500ms). 
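[annotation] Each nestedpendingoperations entry above is the same gate firing: after a failure the operation records a deadline ("No retries permitted until ..."), and the reconciler skips it until that deadline passes. A minimal sketch of the gate (the real kubelet code can also grow the delay on repeated failures of the same operation; this log shows a steady 500ms, and the sketch keeps it fixed):

    package main

    import (
        "fmt"
        "time"
    )

    // op tracks the retry deadline for one volume operation.
    type op struct {
        retryAfter time.Time
        backoff    time.Duration
    }

    // fail records a failure and pushes the retry deadline out by the
    // backoff, mirroring "No retries permitted until ... (durationBeforeRetry 500ms)".
    func (o *op) fail(now time.Time) {
        o.retryAfter = now.Add(o.backoff)
        fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s)\n",
            o.retryAfter.Format(time.RFC3339Nano), o.backoff)
    }

    // mayRetry reports whether the deadline has passed.
    func (o *op) mayRetry(now time.Time) bool { return !now.Before(o.retryAfter) }

    func main() {
        mount := &op{backoff: 500 * time.Millisecond}
        mount.fail(time.Now())
        fmt.Println("retry immediately?", mount.mayRetry(time.Now())) // false
        time.Sleep(600 * time.Millisecond)
        fmt.Println("retry after 600ms? ", mount.mayRetry(time.Now())) // true
    }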
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.305941 4813 generic.go:334] "Generic (PLEG): container finished" podID="75ac4f31-f970-4342-ac67-8e1354f183e2" containerID="aec8c2317cdcbf23557af2175990344de8cbd993df37f762b359b18599668332" exitCode=0 Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.306103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" event={"ID":"75ac4f31-f970-4342-ac67-8e1354f183e2","Type":"ContainerDied","Data":"aec8c2317cdcbf23557af2175990344de8cbd993df37f762b359b18599668332"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.322304 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" event={"ID":"c5909f8e-1a62-455a-a85a-73d85747e3a7","Type":"ContainerStarted","Data":"4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.324380 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.329774 4813 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-g8r9r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.329832 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.345813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.347778 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.847761763 +0000 UTC m=+139.042936155 (durationBeforeRetry 500ms). 
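[annotation] The "SyncLoop (PLEG)" and "Generic (PLEG)" entries are the kubelet's Pod Lifecycle Event Generator turning observed container state changes (started, died with an exit code) into typed events that the sync loop handles per pod. A toy dispatcher with the same shape (field names mirror the log output; this is illustrative, not kubelet code):

    package main

    import "fmt"

    // PodLifecycleEvent mirrors the event={"ID":...,"Type":...,"Data":...}
    // structure printed in the log.
    type PodLifecycleEvent struct {
        ID   string // pod UID
        Type string // "ContainerStarted", "ContainerDied", ...
        Data string // container or sandbox ID
    }

    func handle(pod string, ev PodLifecycleEvent) {
        switch ev.Type {
        case "ContainerStarted":
            fmt.Printf("SyncLoop (PLEG): pod %s: started %s\n", pod, ev.Data)
        case "ContainerDied":
            fmt.Printf("SyncLoop (PLEG): pod %s: died %s\n", pod, ev.Data)
        default:
            fmt.Printf("SyncLoop (PLEG): pod %s: %s\n", pod, ev.Type)
        }
    }

    func main() {
        // Values taken from the oauth-apiserver entry above.
        handle("openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d", PodLifecycleEvent{
            ID:   "75ac4f31-f970-4342-ac67-8e1354f183e2",
            Type: "ContainerDied",
            Data: "aec8c2317cdcbf23557af2175990344de8cbd993df37f762b359b18599668332",
        })
    }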
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.368853 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mbprt" event={"ID":"a917dd4e-95f4-4b15-93f3-d7555f527969","Type":"ContainerStarted","Data":"7420419016b7ac20df453e7f9eca0b488eebce4a00469fd71206dd17492eb5b0"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.369129 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.386263 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" event={"ID":"e30a4ae1-71f0-4065-8e7a-e75e2588aeac","Type":"ContainerStarted","Data":"9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.387361 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.390222 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" podStartSLOduration=120.390199744 podStartE2EDuration="2m0.390199744s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.387866883 +0000 UTC m=+138.583041195" watchObservedRunningTime="2025-12-02 10:10:14.390199744 +0000 UTC m=+138.585374046" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.439847 4813 patch_prober.go:28] interesting pod/console-operator-58897d9998-mbprt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.439928 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mbprt" podUID="a917dd4e-95f4-4b15-93f3-d7555f527969" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.441479 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" podStartSLOduration=119.441463276 podStartE2EDuration="1m59.441463276s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.440658752 +0000 UTC m=+138.635833064" watchObservedRunningTime="2025-12-02 10:10:14.441463276 +0000 UTC m=+138.636637588" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 
10:10:14.446641 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.448330 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:14.948303096 +0000 UTC m=+139.143477458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.457353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" event={"ID":"b72ec18d-92ae-4544-957a-036f8e948b1c","Type":"ContainerStarted","Data":"0f3d18aefc1bf1fb7d932ccad2701fc6bb6e8809ff9356e426c98d74c619a104"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.509523 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mbprt" podStartSLOduration=120.509456451 podStartE2EDuration="2m0.509456451s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.505287793 +0000 UTC m=+138.700462105" watchObservedRunningTime="2025-12-02 10:10:14.509456451 +0000 UTC m=+138.704630753" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.534669 4813 patch_prober.go:28] interesting pod/router-default-5444994796-2bgmp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:10:14 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Dec 02 10:10:14 crc kubenswrapper[4813]: [+]process-running ok Dec 02 10:10:14 crc kubenswrapper[4813]: healthz check failed Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.534741 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2bgmp" podUID="f3d932b1-7a66-4020-b200-fb2ae977f7bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.537235 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vn2ml" podStartSLOduration=119.537217973 podStartE2EDuration="1m59.537217973s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.536331925 +0000 UTC m=+138.731506237" watchObservedRunningTime="2025-12-02 10:10:14.537217973 
+0000 UTC m=+138.732392275" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.546276 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kmcbt" event={"ID":"4d4d1564-e4f3-4b49-ae28-2f4b9a5f4419","Type":"ContainerStarted","Data":"fd258cdbd2e5980c66a93644de54e5e14db9fb50b5071357b740bafa56dc1a1e"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.547982 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.548622 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.048588461 +0000 UTC m=+139.243762763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.561278 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" event={"ID":"ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b","Type":"ContainerStarted","Data":"91d806419169abc240910e0fc3fa79ca69697cb001b251860f3bdcd59e00ab96"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.562333 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.586900 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kmcbt" podStartSLOduration=7.586872995 podStartE2EDuration="7.586872995s" podCreationTimestamp="2025-12-02 10:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.583324716 +0000 UTC m=+138.778499038" watchObservedRunningTime="2025-12-02 10:10:14.586872995 +0000 UTC m=+138.782047297" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.588443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" event={"ID":"98c41744-524c-47d4-b78a-71f53480faba","Type":"ContainerStarted","Data":"59739502babe2bd2d194aa0a741a01902b5d025ae87927f8c957bf2ed60440b7"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.594980 4813 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p8fvw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.595035 4813 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" podUID="ec49ae9a-64d0-4cc7-92b9-4e8bafeedb8b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.639395 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" event={"ID":"b36e8c79-b5eb-44db-9193-39af9560315e","Type":"ContainerStarted","Data":"3aca2924340d6e5bbd48f1d9843ddf7d71034fa32d74d27d5f5acf2c3e385af9"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.640612 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.649329 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.649821 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.149800705 +0000 UTC m=+139.344975007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.651386 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" event={"ID":"80775159-a100-48e2-a896-ff8c5121cd39","Type":"ContainerStarted","Data":"d66694b79fa1a119f644e9950cbfec2e440c608059c63440dad649e6464159d5"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.662025 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" podStartSLOduration=119.662000369 podStartE2EDuration="1m59.662000369s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.624839949 +0000 UTC m=+138.820014261" watchObservedRunningTime="2025-12-02 10:10:14.662000369 +0000 UTC m=+138.857174671" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.663434 4813 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lxq5m container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.663484 4813 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" podUID="b36e8c79-b5eb-44db-9193-39af9560315e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.688059 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8dtjd" event={"ID":"e967798d-a0d2-40e4-af66-ba0d04ac8318","Type":"ContainerStarted","Data":"f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.730723 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7nd9n" podStartSLOduration=120.730698056 podStartE2EDuration="2m0.730698056s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.663628229 +0000 UTC m=+138.858802541" watchObservedRunningTime="2025-12-02 10:10:14.730698056 +0000 UTC m=+138.925872368" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.733835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" event={"ID":"83fd9a6f-d3fc-4e3e-8924-d363bab949eb","Type":"ContainerStarted","Data":"07e8800dc6fda56f7b28d9b8d98059ab813236b721d83d2343c23183a96fe84b"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.762829 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.765733 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.265718709 +0000 UTC m=+139.460893011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.785061 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tlm8g" podStartSLOduration=120.785036222 podStartE2EDuration="2m0.785036222s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.733198442 +0000 UTC m=+138.928372744" watchObservedRunningTime="2025-12-02 10:10:14.785036222 +0000 UTC m=+138.980210524" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.785966 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" podStartSLOduration=119.78595794 podStartE2EDuration="1m59.78595794s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.784587248 +0000 UTC m=+138.979761550" watchObservedRunningTime="2025-12-02 10:10:14.78595794 +0000 UTC m=+138.981132242" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.813845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" event={"ID":"fc3b41fb-9bd1-4f00-9653-8e73a695de87","Type":"ContainerStarted","Data":"4d13669385573210bbf0dd3c8e69a1f69f7914d72b8abc471ebc59f08f0b4997"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.816320 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" event={"ID":"ec41424c-e403-485d-aa92-32c0c41e7238","Type":"ContainerStarted","Data":"e5d5e8e1e619d15222aa3baabff9552377ee414a626e1620c1457160ca4d1abc"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.839551 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8dtjd" podStartSLOduration=120.839520623 podStartE2EDuration="2m0.839520623s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.836683536 +0000 UTC m=+139.031857838" watchObservedRunningTime="2025-12-02 10:10:14.839520623 +0000 UTC m=+139.034694925" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.843642 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.867717 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 
02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.867946 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.367923444 +0000 UTC m=+139.563097746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.868425 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.871121 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.370958657 +0000 UTC m=+139.566132959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.881403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" event={"ID":"03ddc93f-c104-482e-a615-1f6ce52c62b8","Type":"ContainerStarted","Data":"2c85e95c5e5841e150d6e640e24feac3189582149103b30ec543428b950b2b5a"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.882547 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.892818 4813 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tt449 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.893001 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.918825 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5jfdj" podStartSLOduration=119.918799384 podStartE2EDuration="1m59.918799384s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:14.865806449 +0000 UTC m=+139.060980751" watchObservedRunningTime="2025-12-02 10:10:14.918799384 +0000 UTC m=+139.113973686" Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.931915 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" event={"ID":"04f208cb-9296-4a48-8f2c-d5589dad97a1","Type":"ContainerStarted","Data":"42351e31befc67b1c2e6d065ef6f99e901bceba66eb09067279ad41b4e4655db"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.971282 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" event={"ID":"8f750ee8-dda0-4af2-a692-412153a3f80e","Type":"ContainerStarted","Data":"cad09670e63b98e1453ad70fb17b7b488a2ad825aa11e0fdd1f3250c8ab1ef83"} Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.976506 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:14 crc kubenswrapper[4813]: E1202 10:10:14.976604 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.476579746 +0000 UTC m=+139.671754048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:14 crc kubenswrapper[4813]: I1202 10:10:14.987401 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.006889 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.506867894 +0000 UTC m=+139.702042206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.049463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" event={"ID":"581444db-9870-4a61-a384-c3a96bff71de","Type":"ContainerStarted","Data":"b0137815698fbca6f94036e32e9d7f1f28da8f871fb3bf188cc4f35c325f326a"} Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.051375 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zr87q" podStartSLOduration=121.051354969 podStartE2EDuration="2m1.051354969s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:15.050940036 +0000 UTC m=+139.246114348" watchObservedRunningTime="2025-12-02 10:10:15.051354969 +0000 UTC m=+139.246529261" Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.052114 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" podStartSLOduration=121.052109492 podStartE2EDuration="2m1.052109492s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:15.007150403 +0000 UTC m=+139.202324725" watchObservedRunningTime="2025-12-02 10:10:15.052109492 +0000 UTC m=+139.247283794" Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.073455 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" podStartSLOduration=120.073433455 podStartE2EDuration="2m0.073433455s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:15.072016032 +0000 UTC m=+139.267190334" watchObservedRunningTime="2025-12-02 10:10:15.073433455 +0000 UTC m=+139.268607757" Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.079353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" event={"ID":"01bcfafe-52bc-44c0-813a-fede5ecfdc41","Type":"ContainerStarted","Data":"af021677b783d5254735d639e01e06c296745f985b2f643dbb152dcee2f1cb85"} Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.088977 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.090516 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.590497079 +0000 UTC m=+139.785671381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.095908 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rnfw" Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.128347 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" podStartSLOduration=120.128315059 podStartE2EDuration="2m0.128315059s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:15.114978479 +0000 UTC m=+139.310152781" watchObservedRunningTime="2025-12-02 10:10:15.128315059 +0000 UTC m=+139.323489371" Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.190681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.191051 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.691038982 +0000 UTC m=+139.886213274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.192019 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lq6rn" podStartSLOduration=120.192005191 podStartE2EDuration="2m0.192005191s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:15.141593056 +0000 UTC m=+139.336767358" watchObservedRunningTime="2025-12-02 10:10:15.192005191 +0000 UTC m=+139.387179493" Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.295647 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.295789 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.795763103 +0000 UTC m=+139.990937405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.296951 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.297499 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.797482876 +0000 UTC m=+139.992657178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.398052 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.398262 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.898226335 +0000 UTC m=+140.093400637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.398376 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.398825 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.898813903 +0000 UTC m=+140.093988215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.498984 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.499274 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.999229882 +0000 UTC m=+140.194404184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.499471 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.499903 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:15.999884872 +0000 UTC m=+140.195059184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.507059 4813 patch_prober.go:28] interesting pod/router-default-5444994796-2bgmp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:10:15 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Dec 02 10:10:15 crc kubenswrapper[4813]: [+]process-running ok Dec 02 10:10:15 crc kubenswrapper[4813]: healthz check failed Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.507167 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2bgmp" podUID="f3d932b1-7a66-4020-b200-fb2ae977f7bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.600305 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.600589 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.100550019 +0000 UTC m=+140.295724331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.600996 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.601418 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.101406806 +0000 UTC m=+140.296581188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.702578 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.702984 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.20296456 +0000 UTC m=+140.398138862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.804533 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.805046 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.305023309 +0000 UTC m=+140.500197611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.905685 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.905899 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.405866172 +0000 UTC m=+140.601040484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:15 crc kubenswrapper[4813]: I1202 10:10:15.905969 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:15 crc kubenswrapper[4813]: E1202 10:10:15.906380 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.406369897 +0000 UTC m=+140.601544259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.006909 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.007143 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.507108936 +0000 UTC m=+140.702283248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.007395 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.007780 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.507772557 +0000 UTC m=+140.702946859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.107613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7cqw2" event={"ID":"581444db-9870-4a61-a384-c3a96bff71de","Type":"ContainerStarted","Data":"d050af74044778c4b747680250a9e1d428f2347db1ab6eb2cac1b24946de3c5c"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.108066 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.108248 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.608227027 +0000 UTC m=+140.803401329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.108444 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.108759 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.608752083 +0000 UTC m=+140.803926385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.112501 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" event={"ID":"60b9ebac-2fc0-4238-92bb-6d3c25e0c492","Type":"ContainerStarted","Data":"098e37c349b8209b876e947f82760e76641b54ae088a50f6c5c15cbd642bc537"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.112756 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.115427 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" event={"ID":"2b1426bd-91d8-43d6-8d72-5316200e13c7","Type":"ContainerStarted","Data":"c7efbbe179efd49b921d59844255f9cd60eb2b214e688b5d9b43efa7cc291e03"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.117879 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" event={"ID":"bd2a4ac4-4417-42fd-8165-1be41729d64f","Type":"ContainerStarted","Data":"ea8e8a9fd2053affefa832d5aa5d416c4f867347db33eac771b06e2b6f588fce"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.124199 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" event={"ID":"75ac4f31-f970-4342-ac67-8e1354f183e2","Type":"ContainerStarted","Data":"9f9220ecf120d35b183c90b1735b93eef8cbe09373ee162841cf19f5671a7165"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.135290 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" event={"ID":"5967be2f-1cd7-4fbf-9482-92dd8689abdf","Type":"ContainerStarted","Data":"4e0d491279fc743d10117cb3afc4783bf5ccf1cf05cb19019bd22fa73c3e5422"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.141243 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" event={"ID":"8f750ee8-dda0-4af2-a692-412153a3f80e","Type":"ContainerStarted","Data":"cd0123c72ccb876b9e6ff2cf0f3146e78d089682550efc769ef0f9f3300972d9"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.145446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9ztk4" event={"ID":"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6","Type":"ContainerStarted","Data":"51b21c2a975673d869b2014178f72e0ff1ec0eba02cafb00f6e1e35cafe774fb"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.145602 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9ztk4" event={"ID":"84cafe57-4b2e-4a86-bb75-758c6cf1e0f6","Type":"ContainerStarted","Data":"4ce98707854b96a52dc808d4b69f3f61d9346ddb13980d566fc787426eb896e7"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.145626 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.149189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" event={"ID":"12650400-55e2-4496-a52e-eae7bd0434e9","Type":"ContainerStarted","Data":"11842dee8640dca4be2ab9ed6769041002677c0e7249d2dabb7ad9e1bfd7fbd3"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.154672 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" event={"ID":"3b4f0840-cc07-4b0a-a96b-5534312b0553","Type":"ContainerStarted","Data":"050b3530f30743cd6fe68e22f775a11048a2bc094e789aa8f8d15b91ceb36cdf"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.154723 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" event={"ID":"3b4f0840-cc07-4b0a-a96b-5534312b0553","Type":"ContainerStarted","Data":"0d9086ab51c27984e6792d753257b9695c71c3c3a525edcf2ee2eaafd850b715"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.156880 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" event={"ID":"0dd9da09-12ea-444d-8f15-4e1acbffb8d6","Type":"ContainerStarted","Data":"6ae56a48bf9ee63f5ff3d5b7a297fb57eb59ea69deb5fdf62c8db31108de9342"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.158090 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" event={"ID":"8c69e76a-7d95-4c70-8c9b-ab53e7c5a95a","Type":"ContainerStarted","Data":"a3e428fe15d87346f2beb9e88fb97d09ef87fd9419d29f527c9ad3b0190eab51"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.174980 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" event={"ID":"fc3b41fb-9bd1-4f00-9653-8e73a695de87","Type":"ContainerStarted","Data":"17204e2273b9ec483cda6c3b9b9cdae00267aac93389cb7cc23fa9d542587c23"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.175167 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.183560 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.187317 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" event={"ID":"27186c4d-b911-4ef8-8e86-082ddf35d6b7","Type":"ContainerStarted","Data":"404b43919707d421d39f2083eda013425d939bf6fb8207b73f8260970753332d"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.212195 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.213389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" 
event={"ID":"58d04032-dd97-469b-a9e7-de98cd15f688","Type":"ContainerStarted","Data":"ba5fd8f5396ab71785a43c35ab23431f3fda0031128c3bab9456832923ca26ec"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.213760 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" event={"ID":"58d04032-dd97-469b-a9e7-de98cd15f688","Type":"ContainerStarted","Data":"a8697fb05de0c497a734227afc17c8015cf42109a8893252d98840cdc6e923d4"} Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.214359 4813 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tt449 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.214435 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.215283 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.715264069 +0000 UTC m=+140.910438361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.224899 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lxq5m" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.231172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mbprt" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.244844 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.317127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.319393 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.819375132 +0000 UTC m=+141.014549534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.425646 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.426030 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:16.926005362 +0000 UTC m=+141.121179664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.510368 4813 patch_prober.go:28] interesting pod/router-default-5444994796-2bgmp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 10:10:16 crc kubenswrapper[4813]: [+]has-synced ok Dec 02 10:10:16 crc kubenswrapper[4813]: [+]process-running ok Dec 02 10:10:16 crc kubenswrapper[4813]: healthz check failed Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.510441 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2bgmp" podUID="f3d932b1-7a66-4020-b200-fb2ae977f7bf" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.527610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.528089 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.028055821 +0000 UTC m=+141.223230123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.586462 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" podStartSLOduration=121.586441291 podStartE2EDuration="2m1.586441291s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.584014177 +0000 UTC m=+140.779188479" watchObservedRunningTime="2025-12-02 10:10:16.586441291 +0000 UTC m=+140.781615593" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.628294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.628811 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.12879136 +0000 UTC m=+141.323965662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.632988 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rvm28" podStartSLOduration=121.632966158 podStartE2EDuration="2m1.632966158s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.63238475 +0000 UTC m=+140.827559062" watchObservedRunningTime="2025-12-02 10:10:16.632966158 +0000 UTC m=+140.828140460" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.679268 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-h2nx2" podStartSLOduration=121.679250276 podStartE2EDuration="2m1.679250276s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.677473742 +0000 UTC m=+140.872648044" watchObservedRunningTime="2025-12-02 10:10:16.679250276 +0000 UTC m=+140.874424578" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.689314 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p8fvw" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.728235 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-f2zd8" podStartSLOduration=122.728215858 podStartE2EDuration="2m2.728215858s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.726462444 +0000 UTC m=+140.921636736" watchObservedRunningTime="2025-12-02 10:10:16.728215858 +0000 UTC m=+140.923390160" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.729618 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.729925 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.22991199 +0000 UTC m=+141.425086292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.787286 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2p9n" podStartSLOduration=121.787261488 podStartE2EDuration="2m1.787261488s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.786903348 +0000 UTC m=+140.982077650" watchObservedRunningTime="2025-12-02 10:10:16.787261488 +0000 UTC m=+140.982435800" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.821918 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d" podStartSLOduration=121.821899021 podStartE2EDuration="2m1.821899021s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.821579011 +0000 UTC m=+141.016753313" watchObservedRunningTime="2025-12-02 10:10:16.821899021 +0000 UTC m=+141.017073323" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.834955 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.838473 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.338454548 +0000 UTC m=+141.533628850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.847635 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.848169 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.348155336 +0000 UTC m=+141.543329638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.903751 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5wvlc" podStartSLOduration=122.90372715 podStartE2EDuration="2m2.90372715s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.875434242 +0000 UTC m=+141.070608564" watchObservedRunningTime="2025-12-02 10:10:16.90372715 +0000 UTC m=+141.098901442" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.916874 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sklx8" podStartSLOduration=122.916855822 podStartE2EDuration="2m2.916855822s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.907282859 +0000 UTC m=+141.102457161" watchObservedRunningTime="2025-12-02 10:10:16.916855822 +0000 UTC m=+141.112030124" Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.941748 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b4frq" podStartSLOduration=121.941728065 podStartE2EDuration="2m1.941728065s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.937826025 +0000 UTC m=+141.133000327" watchObservedRunningTime="2025-12-02 10:10:16.941728065 +0000 UTC 
m=+141.136902367"
Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.951921 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.952251 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.452216727 +0000 UTC m=+141.647391029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.952617 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:10:16 crc kubenswrapper[4813]: E1202 10:10:16.953157 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.453148245 +0000 UTC m=+141.648322547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 10:10:16 crc kubenswrapper[4813]: I1202 10:10:16.975543 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9ztk4" podStartSLOduration=9.975513661 podStartE2EDuration="9.975513661s" podCreationTimestamp="2025-12-02 10:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:16.969319591 +0000 UTC m=+141.164493913" watchObservedRunningTime="2025-12-02 10:10:16.975513661 +0000 UTC m=+141.170687963"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.001835 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jxfgc" podStartSLOduration=122.001814138 podStartE2EDuration="2m2.001814138s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:17.001171188 +0000 UTC m=+141.196345490" watchObservedRunningTime="2025-12-02 10:10:17.001814138 +0000 UTC m=+141.196988440"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.038531 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x82gd" podStartSLOduration=123.038508213 podStartE2EDuration="2m3.038508213s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:17.029193047 +0000 UTC m=+141.224367349" watchObservedRunningTime="2025-12-02 10:10:17.038508213 +0000 UTC m=+141.233682515"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.039462 4813 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.057569 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:10:17 crc kubenswrapper[4813]: E1202 10:10:17.058065 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.558046842 +0000 UTC m=+141.753221144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.089290 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-22vj7" podStartSLOduration=122.089267979 podStartE2EDuration="2m2.089267979s" podCreationTimestamp="2025-12-02 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:17.080438919 +0000 UTC m=+141.275613221" watchObservedRunningTime="2025-12-02 10:10:17.089267979 +0000 UTC m=+141.284442271"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.159117 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:10:17 crc kubenswrapper[4813]: E1202 10:10:17.159468 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 10:10:17.659454792 +0000 UTC m=+141.854629094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zkbcp" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.220145 4813 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-02T10:10:17.039489423Z","Handler":null,"Name":""}
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.257321 4813 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.257362 4813 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.261831 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.266156 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" event={"ID":"0dd9da09-12ea-444d-8f15-4e1acbffb8d6","Type":"ContainerStarted","Data":"7c989b28f858644e78f31d871c48d096c56b2e6a826a02701c82453ba5c19924"}
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.266360 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" event={"ID":"0dd9da09-12ea-444d-8f15-4e1acbffb8d6","Type":"ContainerStarted","Data":"bfe8604c9f00f44d60e936c024c57f2570cac10258fcd066ff83c90553976fc9"}
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.279606 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.279809 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tt449"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.366457 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.380460 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.380506 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.458316 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zkbcp\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") " pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.511635 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2bgmp"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.512475 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2bgmp"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.517168 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2bgmp"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.605662 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.676725 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ljxzg"]
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.678830 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.689660 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.713222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ljxzg"]
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.782835 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-catalog-content\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.782900 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmkx\" (UniqueName: \"kubernetes.io/projected/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-kube-api-access-vcmkx\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.782978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-utilities\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.857806 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4c78n"]
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.858852 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.862869 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.870852 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4c78n"]
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.884096 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-catalog-content\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.884157 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmkx\" (UniqueName: \"kubernetes.io/projected/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-kube-api-access-vcmkx\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.884214 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-utilities\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.885279 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-utilities\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.886212 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-catalog-content\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.912503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmkx\" (UniqueName: \"kubernetes.io/projected/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-kube-api-access-vcmkx\") pod \"certified-operators-ljxzg\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.942318 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zkbcp"]
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.985182 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-utilities\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.985286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-catalog-content\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:17 crc kubenswrapper[4813]: I1202 10:10:17.985328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6z6f\" (UniqueName: \"kubernetes.io/projected/dc26ee61-a67c-4200-8cd7-4ca46e748fea-kube-api-access-j6z6f\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.022024 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ljxzg"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.061819 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7tbf8"]
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.063258 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.077921 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.079106 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7tbf8"]
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.086864 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-catalog-content\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.086949 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6z6f\" (UniqueName: \"kubernetes.io/projected/dc26ee61-a67c-4200-8cd7-4ca46e748fea-kube-api-access-j6z6f\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.087043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-utilities\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.087589 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-utilities\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.087868 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-catalog-content\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.116196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6z6f\" (UniqueName: \"kubernetes.io/projected/dc26ee61-a67c-4200-8cd7-4ca46e748fea-kube-api-access-j6z6f\") pod \"community-operators-4c78n\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.190743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-catalog-content\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.191416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f8f9\" (UniqueName: \"kubernetes.io/projected/40e81749-c104-4999-9e79-eea19913cbc2-kube-api-access-9f8f9\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.191249 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c78n"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.191614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-utilities\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.257053 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mkq9l"]
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.258253 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.269228 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkq9l"]
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.287046 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" event={"ID":"bd3bb4e8-6c34-42b4-b041-54de4c5d219b","Type":"ContainerStarted","Data":"d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817"}
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.287125 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" event={"ID":"bd3bb4e8-6c34-42b4-b041-54de4c5d219b","Type":"ContainerStarted","Data":"2e3f961dcf44d90aa93bc034f34f04947234ef242b8a863853673a1d5fe7490f"}
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.287173 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.290946 4813 generic.go:334] "Generic (PLEG): container finished" podID="ec41424c-e403-485d-aa92-32c0c41e7238" containerID="e5d5e8e1e619d15222aa3baabff9552377ee414a626e1620c1457160ca4d1abc" exitCode=0
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.291018 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" event={"ID":"ec41424c-e403-485d-aa92-32c0c41e7238","Type":"ContainerDied","Data":"e5d5e8e1e619d15222aa3baabff9552377ee414a626e1620c1457160ca4d1abc"}
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.292518 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-catalog-content\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.292542 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f8f9\" (UniqueName: \"kubernetes.io/projected/40e81749-c104-4999-9e79-eea19913cbc2-kube-api-access-9f8f9\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.292575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-utilities\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.293310 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-utilities\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.298492 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-catalog-content\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.314308 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" event={"ID":"0dd9da09-12ea-444d-8f15-4e1acbffb8d6","Type":"ContainerStarted","Data":"e73b045f96ce1db1b97f13a67b80f48464aceec86ee930b7ce46a6bc1ea1f90b"}
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.316392 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" podStartSLOduration=124.316371508 podStartE2EDuration="2m4.316371508s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:18.313022515 +0000 UTC m=+142.508196837" watchObservedRunningTime="2025-12-02 10:10:18.316371508 +0000 UTC m=+142.511545810"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.319732 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f8f9\" (UniqueName: \"kubernetes.io/projected/40e81749-c104-4999-9e79-eea19913cbc2-kube-api-access-9f8f9\") pod \"certified-operators-7tbf8\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.346431 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ljxzg"]
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.394175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-utilities\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.394252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-catalog-content\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.394444 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khnz\" (UniqueName: \"kubernetes.io/projected/fbd02c03-797f-44fd-87f3-465e9198c4e8-kube-api-access-5khnz\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.395429 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7tbf8"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.496452 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5khnz\" (UniqueName: \"kubernetes.io/projected/fbd02c03-797f-44fd-87f3-465e9198c4e8-kube-api-access-5khnz\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.496627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-utilities\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.496681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-catalog-content\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.497563 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-catalog-content\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.497647 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-utilities\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.524930 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5khnz\" (UniqueName: \"kubernetes.io/projected/fbd02c03-797f-44fd-87f3-465e9198c4e8-kube-api-access-5khnz\") pod \"community-operators-mkq9l\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.525430 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lzjn8" podStartSLOduration=11.525416298 podStartE2EDuration="11.525416298s" podCreationTimestamp="2025-12-02 10:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:18.369750315 +0000 UTC m=+142.564924617" watchObservedRunningTime="2025-12-02 10:10:18.525416298 +0000 UTC m=+142.720590600"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.530302 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4c78n"]
Dec 02 10:10:18 crc kubenswrapper[4813]: W1202 10:10:18.543266 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc26ee61_a67c_4200_8cd7_4ca46e748fea.slice/crio-2f6abb6751259b0488f92b2b97797df4bf8aab7df1d4e2a8edc22f597e4609f2 WatchSource:0}: Error finding container 2f6abb6751259b0488f92b2b97797df4bf8aab7df1d4e2a8edc22f597e4609f2: Status 404 returned error can't find the container with id 2f6abb6751259b0488f92b2b97797df4bf8aab7df1d4e2a8edc22f597e4609f2
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.601710 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkq9l"
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.683446 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7tbf8"]
Dec 02 10:10:18 crc kubenswrapper[4813]: I1202 10:10:18.829301 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkq9l"]
Dec 02 10:10:18 crc kubenswrapper[4813]: W1202 10:10:18.896148 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbd02c03_797f_44fd_87f3_465e9198c4e8.slice/crio-78a1afb4b6d1792d1a2d803abcf670e867cac11097a1085b4de6e636a4e18c5f WatchSource:0}: Error finding container 78a1afb4b6d1792d1a2d803abcf670e867cac11097a1085b4de6e636a4e18c5f: Status 404 returned error can't find the container with id 78a1afb4b6d1792d1a2d803abcf670e867cac11097a1085b4de6e636a4e18c5f
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.322782 4813 generic.go:334] "Generic (PLEG): container finished" podID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerID="f0e4293d88063cc21d1944d400c18a60bf1626b658ef37498eee868b9b893555" exitCode=0
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.322896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkq9l" event={"ID":"fbd02c03-797f-44fd-87f3-465e9198c4e8","Type":"ContainerDied","Data":"f0e4293d88063cc21d1944d400c18a60bf1626b658ef37498eee868b9b893555"}
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.322937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkq9l" event={"ID":"fbd02c03-797f-44fd-87f3-465e9198c4e8","Type":"ContainerStarted","Data":"78a1afb4b6d1792d1a2d803abcf670e867cac11097a1085b4de6e636a4e18c5f"}
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.325109 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.328531 4813 generic.go:334] "Generic (PLEG): container finished" podID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerID="78d283f491e3443d0855192b51e9d883d79f716dec4790f4c487ead9995133cb" exitCode=0
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.328610 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c78n" event={"ID":"dc26ee61-a67c-4200-8cd7-4ca46e748fea","Type":"ContainerDied","Data":"78d283f491e3443d0855192b51e9d883d79f716dec4790f4c487ead9995133cb"}
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.328651 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c78n" event={"ID":"dc26ee61-a67c-4200-8cd7-4ca46e748fea","Type":"ContainerStarted","Data":"2f6abb6751259b0488f92b2b97797df4bf8aab7df1d4e2a8edc22f597e4609f2"}
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.335978 4813 generic.go:334] "Generic (PLEG): container finished" podID="40e81749-c104-4999-9e79-eea19913cbc2" containerID="83fe8cb8e58860cde3d8cb851913f25afea91e0ecf8be5ab963f55ff3ebba60d" exitCode=0
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.336030 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tbf8" event={"ID":"40e81749-c104-4999-9e79-eea19913cbc2","Type":"ContainerDied","Data":"83fe8cb8e58860cde3d8cb851913f25afea91e0ecf8be5ab963f55ff3ebba60d"}
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.336109 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tbf8" event={"ID":"40e81749-c104-4999-9e79-eea19913cbc2","Type":"ContainerStarted","Data":"14f5862bbab5c9b1d8936dbcf414c637811879c213691d0789e7bd45f0210bc0"}
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.338583 4813 generic.go:334] "Generic (PLEG): container finished" podID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerID="ffd27258a73e766379e60732cc6b1ab49855c5a9c22f377015d62110a178dca3" exitCode=0
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.338744 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljxzg" event={"ID":"72dfffc2-b16b-47e4-9e6c-1c5562e48db0","Type":"ContainerDied","Data":"ffd27258a73e766379e60732cc6b1ab49855c5a9c22f377015d62110a178dca3"}
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.338779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljxzg" event={"ID":"72dfffc2-b16b-47e4-9e6c-1c5562e48db0","Type":"ContainerStarted","Data":"8bcc86ae0081a1db42d3cf7ecfbffd22cee493e4c9ff801b1cc0899826f88d23"}
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.363296 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hbqc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.363400 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8hbqc" podUID="3ca46876-8b39-440e-a82f-b6eb424cca00" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.367977 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-8hbqc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.368089 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8hbqc" podUID="3ca46876-8b39-440e-a82f-b6eb424cca00" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.547482 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.608818 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec41424c-e403-485d-aa92-32c0c41e7238-secret-volume\") pod \"ec41424c-e403-485d-aa92-32c0c41e7238\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") "
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.608986 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcls4\" (UniqueName: \"kubernetes.io/projected/ec41424c-e403-485d-aa92-32c0c41e7238-kube-api-access-hcls4\") pod \"ec41424c-e403-485d-aa92-32c0c41e7238\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") "
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.609021 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec41424c-e403-485d-aa92-32c0c41e7238-config-volume\") pod \"ec41424c-e403-485d-aa92-32c0c41e7238\" (UID: \"ec41424c-e403-485d-aa92-32c0c41e7238\") "
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.609834 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec41424c-e403-485d-aa92-32c0c41e7238-config-volume" (OuterVolumeSpecName: "config-volume") pod "ec41424c-e403-485d-aa92-32c0c41e7238" (UID: "ec41424c-e403-485d-aa92-32c0c41e7238"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.615052 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec41424c-e403-485d-aa92-32c0c41e7238-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec41424c-e403-485d-aa92-32c0c41e7238" (UID: "ec41424c-e403-485d-aa92-32c0c41e7238"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.625441 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec41424c-e403-485d-aa92-32c0c41e7238-kube-api-access-hcls4" (OuterVolumeSpecName: "kube-api-access-hcls4") pod "ec41424c-e403-485d-aa92-32c0c41e7238" (UID: "ec41424c-e403-485d-aa92-32c0c41e7238"). InnerVolumeSpecName "kube-api-access-hcls4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.654420 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-csjp9"]
Dec 02 10:10:19 crc kubenswrapper[4813]: E1202 10:10:19.654658 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec41424c-e403-485d-aa92-32c0c41e7238" containerName="collect-profiles"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.654675 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec41424c-e403-485d-aa92-32c0c41e7238" containerName="collect-profiles"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.654792 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec41424c-e403-485d-aa92-32c0c41e7238" containerName="collect-profiles"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.655767 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.658179 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.671789 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-csjp9"]
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.710782 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-utilities\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.710885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-catalog-content\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.710950 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwvt\" (UniqueName: \"kubernetes.io/projected/64b4cb5e-0d3b-437c-8287-599558fd972b-kube-api-access-ctwvt\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.711152 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcls4\" (UniqueName: \"kubernetes.io/projected/ec41424c-e403-485d-aa92-32c0c41e7238-kube-api-access-hcls4\") on node \"crc\" DevicePath \"\""
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.711171 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec41424c-e403-485d-aa92-32c0c41e7238-config-volume\") on node \"crc\" DevicePath \"\""
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.711183 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec41424c-e403-485d-aa92-32c0c41e7238-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.815523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-utilities\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.815619 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-catalog-content\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.815673 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctwvt\" (UniqueName: \"kubernetes.io/projected/64b4cb5e-0d3b-437c-8287-599558fd972b-kube-api-access-ctwvt\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.816226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-catalog-content\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.816226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-utilities\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.832631 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctwvt\" (UniqueName: \"kubernetes.io/projected/64b4cb5e-0d3b-437c-8287-599558fd972b-kube-api-access-ctwvt\") pod \"redhat-marketplace-csjp9\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.973833 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-csjp9"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.997951 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8dtjd"
Dec 02 10:10:19 crc kubenswrapper[4813]: I1202 10:10:19.998187 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8dtjd"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.001113 4813 patch_prober.go:28] interesting pod/console-f9d7485db-8dtjd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.001280 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8dtjd" podUID="e967798d-a0d2-40e4-af66-ba0d04ac8318" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.056333 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r2rvt"]
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.057637 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.069826 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2rvt"]
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.188332 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-csjp9"]
Dec 02 10:10:20 crc kubenswrapper[4813]: W1202 10:10:20.204389 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64b4cb5e_0d3b_437c_8287_599558fd972b.slice/crio-d5b4431a4203c56ffd887eae35827fe9f38b2df79fa2684228387e0cca052bcf WatchSource:0}: Error finding container d5b4431a4203c56ffd887eae35827fe9f38b2df79fa2684228387e0cca052bcf: Status 404 returned error can't find the container with id d5b4431a4203c56ffd887eae35827fe9f38b2df79fa2684228387e0cca052bcf
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.222279 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkwwd\" (UniqueName: \"kubernetes.io/projected/8988c99f-2949-423b-9fc7-406be45a14ff-kube-api-access-hkwwd\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.222368 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-catalog-content\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.222394 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-utilities\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.268057 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.268221 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.274415 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.325212 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkwwd\" (UniqueName: \"kubernetes.io/projected/8988c99f-2949-423b-9fc7-406be45a14ff-kube-api-access-hkwwd\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.325310 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-utilities\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.325343 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-catalog-content\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.326174 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-catalog-content\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.326431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-utilities\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.349349 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg" event={"ID":"ec41424c-e403-485d-aa92-32c0c41e7238","Type":"ContainerDied","Data":"f36f789b28ee8d1cef8f31393d3976c346e40195dcaa15b6b3db1981d2f262cc"}
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.349373 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.349408 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36f789b28ee8d1cef8f31393d3976c346e40195dcaa15b6b3db1981d2f262cc"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.354399 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkwwd\" (UniqueName: \"kubernetes.io/projected/8988c99f-2949-423b-9fc7-406be45a14ff-kube-api-access-hkwwd\") pod \"redhat-marketplace-r2rvt\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.363313 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csjp9" event={"ID":"64b4cb5e-0d3b-437c-8287-599558fd972b","Type":"ContainerStarted","Data":"d5b4431a4203c56ffd887eae35827fe9f38b2df79fa2684228387e0cca052bcf"}
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.376246 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh74d"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.387290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2rvt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.628985 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.630383 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.636945 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.637287 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.637304 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.637442 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.652151 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.746908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.747005 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.747381 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.779290 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.860847 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g2t6q"]
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.862361 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.866775 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.870507 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2t6q"]
Dec 02 10:10:20 crc kubenswrapper[4813]: I1202 10:10:20.968609 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.050926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-catalog-content\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.051286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jb4l\" (UniqueName: \"kubernetes.io/projected/44363502-e734-4d2e-8f4b-eec2442afe63-kube-api-access-2jb4l\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.051353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-utilities\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.065603 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2rvt"]
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.153357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-catalog-content\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.153412 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jb4l\" (UniqueName: \"kubernetes.io/projected/44363502-e734-4d2e-8f4b-eec2442afe63-kube-api-access-2jb4l\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.153443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-utilities\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.153922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-utilities\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.154164 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-catalog-content\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.178245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jb4l\" (UniqueName: \"kubernetes.io/projected/44363502-e734-4d2e-8f4b-eec2442afe63-kube-api-access-2jb4l\") pod \"redhat-operators-g2t6q\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.199742 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2t6q"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.273120 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ffq5"]
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.278681 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.285571 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ffq5"]
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.325947 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 02 10:10:21 crc kubenswrapper[4813]: W1202 10:10:21.361540 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc513235d_3d00_4ce4_a879_ed7f3b32e81e.slice/crio-1f52231b6d842d9225854f29c882c0680f55a778b3bdf3386efc066a476e2e7b WatchSource:0}: Error finding container 1f52231b6d842d9225854f29c882c0680f55a778b3bdf3386efc066a476e2e7b: Status 404 returned error can't find the container with id 1f52231b6d842d9225854f29c882c0680f55a778b3bdf3386efc066a476e2e7b
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.428224 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2rvt" event={"ID":"8988c99f-2949-423b-9fc7-406be45a14ff","Type":"ContainerStarted","Data":"f5392d777fe4c37d50fe4d0a9ed2a2783e463cdbb9f313c1a772165bfb846959"}
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.436329 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c513235d-3d00-4ce4-a879-ed7f3b32e81e","Type":"ContainerStarted","Data":"1f52231b6d842d9225854f29c882c0680f55a778b3bdf3386efc066a476e2e7b"}
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.448205 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csjp9" event={"ID":"64b4cb5e-0d3b-437c-8287-599558fd972b","Type":"ContainerDied","Data":"57820e3607df661497a6686657b33e000e2eafb8d070536b3b808c7ac6384b95"}
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.455120 4813 generic.go:334] "Generic (PLEG): container finished" podID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerID="57820e3607df661497a6686657b33e000e2eafb8d070536b3b808c7ac6384b95" exitCode=0
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.463664 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ntb\" (UniqueName: \"kubernetes.io/projected/526e97b9-958b-4fc1-859b-5b0c10d093c5-kube-api-access-d5ntb\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.463716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-utilities\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.464572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-catalog-content\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.573062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-catalog-content\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.573305 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ntb\" (UniqueName: \"kubernetes.io/projected/526e97b9-958b-4fc1-859b-5b0c10d093c5-kube-api-access-d5ntb\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.573349 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-utilities\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.573999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-catalog-content\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.588386 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-utilities\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.601724 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ntb\" (UniqueName: \"kubernetes.io/projected/526e97b9-958b-4fc1-859b-5b0c10d093c5-kube-api-access-d5ntb\") pod \"redhat-operators-4ffq5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.603211 4813
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ffq5" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.779779 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2t6q"] Dec 02 10:10:21 crc kubenswrapper[4813]: W1202 10:10:21.815933 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44363502_e734_4d2e_8f4b_eec2442afe63.slice/crio-d8e436d4d2dbd5b34d12ebc7209418d6174ffc8f29513335570ee43914a4eb02 WatchSource:0}: Error finding container d8e436d4d2dbd5b34d12ebc7209418d6174ffc8f29513335570ee43914a4eb02: Status 404 returned error can't find the container with id d8e436d4d2dbd5b34d12ebc7209418d6174ffc8f29513335570ee43914a4eb02 Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.881771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.881891 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.881937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.881967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.882922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.887340 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.887723 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.889376 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.926648 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.927619 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.930582 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.930769 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.952625 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 10:10:21 crc kubenswrapper[4813]: I1202 10:10:21.985250 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ffq5"] Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.021849 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.036064 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 10:10:22 crc kubenswrapper[4813]: W1202 10:10:22.039180 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod526e97b9_958b_4fc1_859b_5b0c10d093c5.slice/crio-54219f97af13171180a318ba8eabab763252dff9f07ca45d402de27184232321 WatchSource:0}: Error finding container 54219f97af13171180a318ba8eabab763252dff9f07ca45d402de27184232321: Status 404 returned error can't find the container with id 54219f97af13171180a318ba8eabab763252dff9f07ca45d402de27184232321 Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.084579 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ac7347-94ab-4dea-a494-975ece27927c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"44ac7347-94ab-4dea-a494-975ece27927c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.084692 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ac7347-94ab-4dea-a494-975ece27927c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"44ac7347-94ab-4dea-a494-975ece27927c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.090422 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.188182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ac7347-94ab-4dea-a494-975ece27927c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"44ac7347-94ab-4dea-a494-975ece27927c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.188272 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ac7347-94ab-4dea-a494-975ece27927c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"44ac7347-94ab-4dea-a494-975ece27927c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.188370 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ac7347-94ab-4dea-a494-975ece27927c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"44ac7347-94ab-4dea-a494-975ece27927c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.210375 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ac7347-94ab-4dea-a494-975ece27927c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"44ac7347-94ab-4dea-a494-975ece27927c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.261468 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.497134 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2t6q" event={"ID":"44363502-e734-4d2e-8f4b-eec2442afe63","Type":"ContainerStarted","Data":"d8e436d4d2dbd5b34d12ebc7209418d6174ffc8f29513335570ee43914a4eb02"} Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.503915 4813 generic.go:334] "Generic (PLEG): container finished" podID="8988c99f-2949-423b-9fc7-406be45a14ff" containerID="f08c4aab45ab02b93d94ddd80f373560214a9e154be7990f8252f3d1167398a8" exitCode=0 Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.503992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2rvt" event={"ID":"8988c99f-2949-423b-9fc7-406be45a14ff","Type":"ContainerDied","Data":"f08c4aab45ab02b93d94ddd80f373560214a9e154be7990f8252f3d1167398a8"} Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.504864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ffq5" event={"ID":"526e97b9-958b-4fc1-859b-5b0c10d093c5","Type":"ContainerStarted","Data":"54219f97af13171180a318ba8eabab763252dff9f07ca45d402de27184232321"} Dec 02 10:10:22 crc kubenswrapper[4813]: W1202 10:10:22.753201 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-05c9a3eab7a29dcf676e07f7ede0600b8d92739c022d5fe99a3ff38decec9ab7 WatchSource:0}: Error finding container 05c9a3eab7a29dcf676e07f7ede0600b8d92739c022d5fe99a3ff38decec9ab7: Status 404 returned error can't find the container with id 05c9a3eab7a29dcf676e07f7ede0600b8d92739c022d5fe99a3ff38decec9ab7 Dec 02 10:10:22 crc kubenswrapper[4813]: I1202 10:10:22.867595 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.512446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"44ac7347-94ab-4dea-a494-975ece27927c","Type":"ContainerStarted","Data":"8def736dcedc1cf26215309d39e5ba8a0c52a77817144992db566404e498188c"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.514721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"50a55aa4a932a4a3b2b4e9de17c12adccff33c929bfa2c525cffd90acfa427cc"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.514785 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"830b137280f837d45d9c21175a96709cef655eb6704647956ea2268d0617b64e"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.523300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"05c9a3eab7a29dcf676e07f7ede0600b8d92739c022d5fe99a3ff38decec9ab7"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.556044 4813 generic.go:334] "Generic (PLEG): container finished" podID="44363502-e734-4d2e-8f4b-eec2442afe63" 
containerID="389f39e78af5d21b0b879cbdfb6fb617fc4a0ca55a461a3fd359237ee826d2b9" exitCode=0 Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.556191 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2t6q" event={"ID":"44363502-e734-4d2e-8f4b-eec2442afe63","Type":"ContainerDied","Data":"389f39e78af5d21b0b879cbdfb6fb617fc4a0ca55a461a3fd359237ee826d2b9"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.574379 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f267e44b8f7dab50df75f6607087a7b6480fec10422a40d50e2c8cd6d6bcff02"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.574424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5af72d5f0dea480b98714294c6c82fe597a76eda776ec69c41cf34f818bf4f6b"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.574669 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.577378 4813 generic.go:334] "Generic (PLEG): container finished" podID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerID="5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6" exitCode=0 Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.577448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ffq5" event={"ID":"526e97b9-958b-4fc1-859b-5b0c10d093c5","Type":"ContainerDied","Data":"5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.583910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c513235d-3d00-4ce4-a879-ed7f3b32e81e","Type":"ContainerStarted","Data":"a0e6b62cbf127e09561bf819c63ca84fb924d580637ddcce730a4ad2f9277a2f"} Dec 02 10:10:23 crc kubenswrapper[4813]: I1202 10:10:23.650717 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.650696972 podStartE2EDuration="3.650696972s" podCreationTimestamp="2025-12-02 10:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:23.647465003 +0000 UTC m=+147.842639305" watchObservedRunningTime="2025-12-02 10:10:23.650696972 +0000 UTC m=+147.845871274" Dec 02 10:10:24 crc kubenswrapper[4813]: I1202 10:10:24.610325 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f54e441cea25bc2598446e458f168d56bfefd43a970ece3634976ade65470f52"} Dec 02 10:10:24 crc kubenswrapper[4813]: I1202 10:10:24.624263 4813 generic.go:334] "Generic (PLEG): container finished" podID="c513235d-3d00-4ce4-a879-ed7f3b32e81e" containerID="a0e6b62cbf127e09561bf819c63ca84fb924d580637ddcce730a4ad2f9277a2f" exitCode=0 Dec 02 10:10:24 crc kubenswrapper[4813]: I1202 10:10:24.624398 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"c513235d-3d00-4ce4-a879-ed7f3b32e81e","Type":"ContainerDied","Data":"a0e6b62cbf127e09561bf819c63ca84fb924d580637ddcce730a4ad2f9277a2f"} Dec 02 10:10:24 crc kubenswrapper[4813]: I1202 10:10:24.654966 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"44ac7347-94ab-4dea-a494-975ece27927c","Type":"ContainerStarted","Data":"e64cd10f51fcffa6c974b27e53636e77e888e69e3ea8748069570d52c7ff6188"} Dec 02 10:10:24 crc kubenswrapper[4813]: I1202 10:10:24.718977 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.7189554190000003 podStartE2EDuration="3.718955419s" podCreationTimestamp="2025-12-02 10:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:24.707772456 +0000 UTC m=+148.902946758" watchObservedRunningTime="2025-12-02 10:10:24.718955419 +0000 UTC m=+148.914129721" Dec 02 10:10:25 crc kubenswrapper[4813]: I1202 10:10:25.663924 4813 generic.go:334] "Generic (PLEG): container finished" podID="44ac7347-94ab-4dea-a494-975ece27927c" containerID="e64cd10f51fcffa6c974b27e53636e77e888e69e3ea8748069570d52c7ff6188" exitCode=0 Dec 02 10:10:25 crc kubenswrapper[4813]: I1202 10:10:25.664006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"44ac7347-94ab-4dea-a494-975ece27927c","Type":"ContainerDied","Data":"e64cd10f51fcffa6c974b27e53636e77e888e69e3ea8748069570d52c7ff6188"} Dec 02 10:10:25 crc kubenswrapper[4813]: I1202 10:10:25.862813 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9ztk4" Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.011581 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.080270 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kubelet-dir\") pod \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\" (UID: \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\") " Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.080347 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kube-api-access\") pod \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\" (UID: \"c513235d-3d00-4ce4-a879-ed7f3b32e81e\") " Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.080427 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c513235d-3d00-4ce4-a879-ed7f3b32e81e" (UID: "c513235d-3d00-4ce4-a879-ed7f3b32e81e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.080603 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.086255 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c513235d-3d00-4ce4-a879-ed7f3b32e81e" (UID: "c513235d-3d00-4ce4-a879-ed7f3b32e81e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.181964 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c513235d-3d00-4ce4-a879-ed7f3b32e81e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.672511 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.672501 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c513235d-3d00-4ce4-a879-ed7f3b32e81e","Type":"ContainerDied","Data":"1f52231b6d842d9225854f29c882c0680f55a778b3bdf3386efc066a476e2e7b"} Dec 02 10:10:26 crc kubenswrapper[4813]: I1202 10:10:26.672558 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f52231b6d842d9225854f29c882c0680f55a778b3bdf3386efc066a476e2e7b" Dec 02 10:10:29 crc kubenswrapper[4813]: I1202 10:10:29.377273 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8hbqc" Dec 02 10:10:29 crc kubenswrapper[4813]: I1202 10:10:29.998506 4813 patch_prober.go:28] interesting pod/console-f9d7485db-8dtjd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 02 10:10:29 crc kubenswrapper[4813]: I1202 10:10:29.998581 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8dtjd" podUID="e967798d-a0d2-40e4-af66-ba0d04ac8318" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 02 10:10:31 crc kubenswrapper[4813]: I1202 10:10:31.879206 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.061573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ac7347-94ab-4dea-a494-975ece27927c-kube-api-access\") pod \"44ac7347-94ab-4dea-a494-975ece27927c\" (UID: \"44ac7347-94ab-4dea-a494-975ece27927c\") " Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.061700 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ac7347-94ab-4dea-a494-975ece27927c-kubelet-dir\") pod \"44ac7347-94ab-4dea-a494-975ece27927c\" (UID: \"44ac7347-94ab-4dea-a494-975ece27927c\") " Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.062273 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44ac7347-94ab-4dea-a494-975ece27927c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "44ac7347-94ab-4dea-a494-975ece27927c" (UID: "44ac7347-94ab-4dea-a494-975ece27927c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.063104 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44ac7347-94ab-4dea-a494-975ece27927c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.069462 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ac7347-94ab-4dea-a494-975ece27927c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "44ac7347-94ab-4dea-a494-975ece27927c" (UID: "44ac7347-94ab-4dea-a494-975ece27927c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.165032 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44ac7347-94ab-4dea-a494-975ece27927c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.739670 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"44ac7347-94ab-4dea-a494-975ece27927c","Type":"ContainerDied","Data":"8def736dcedc1cf26215309d39e5ba8a0c52a77817144992db566404e498188c"} Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.739717 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8def736dcedc1cf26215309d39e5ba8a0c52a77817144992db566404e498188c" Dec 02 10:10:32 crc kubenswrapper[4813]: I1202 10:10:32.739768 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 10:10:34 crc kubenswrapper[4813]: I1202 10:10:34.273785 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:10:34 crc kubenswrapper[4813]: I1202 10:10:34.274221 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:10:36 crc kubenswrapper[4813]: I1202 10:10:36.730863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:10:36 crc kubenswrapper[4813]: I1202 10:10:36.740522 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05bb9583-6b23-4207-b709-89dfe49fad73-metrics-certs\") pod \"network-metrics-daemon-62bfc\" (UID: \"05bb9583-6b23-4207-b709-89dfe49fad73\") " pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:10:36 crc kubenswrapper[4813]: I1202 10:10:36.793164 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-62bfc" Dec 02 10:10:37 crc kubenswrapper[4813]: I1202 10:10:37.612186 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" Dec 02 10:10:40 crc kubenswrapper[4813]: I1202 10:10:40.002895 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:40 crc kubenswrapper[4813]: I1202 10:10:40.007310 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:10:50 crc kubenswrapper[4813]: I1202 10:10:50.381023 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rbpqf" Dec 02 10:10:51 crc kubenswrapper[4813]: E1202 10:10:51.102651 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 10:10:51 crc kubenswrapper[4813]: E1202 10:10:51.102985 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcmkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ljxzg_openshift-marketplace(72dfffc2-b16b-47e4-9e6c-1c5562e48db0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:10:51 crc kubenswrapper[4813]: E1202 10:10:51.104318 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ljxzg" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" Dec 02 10:10:51 crc kubenswrapper[4813]: E1202 10:10:51.958127 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ljxzg" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" Dec 02 10:10:52 crc kubenswrapper[4813]: E1202 10:10:52.711658 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 02 10:10:52 crc kubenswrapper[4813]: E1202 10:10:52.711895 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6z6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4c78n_openshift-marketplace(dc26ee61-a67c-4200-8cd7-4ca46e748fea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:10:52 crc kubenswrapper[4813]: E1202 10:10:52.713863 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4c78n" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.127289 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4c78n" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.138798 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.138999 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctwvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-csjp9_openshift-marketplace(64b4cb5e-0d3b-437c-8287-599558fd972b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.140208 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-csjp9" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.194992 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.195162 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkwwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-r2rvt_openshift-marketplace(8988c99f-2949-423b-9fc7-406be45a14ff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.197158 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r2rvt" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.263858 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.264018 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5khnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mkq9l_openshift-marketplace(fbd02c03-797f-44fd-87f3-465e9198c4e8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.265173 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mkq9l" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.268497 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.268662 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f8f9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7tbf8_openshift-marketplace(40e81749-c104-4999-9e79-eea19913cbc2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:10:53 crc kubenswrapper[4813]: E1202 10:10:53.270768 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7tbf8" podUID="40e81749-c104-4999-9e79-eea19913cbc2" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.140204 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7tbf8" podUID="40e81749-c104-4999-9e79-eea19913cbc2" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.140319 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-csjp9" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.140312 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mkq9l" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.168957 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.169187 4813 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jb4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-g2t6q_openshift-marketplace(44363502-e734-4d2e-8f4b-eec2442afe63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.170845 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-g2t6q" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.177465 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.177838 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5ntb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4ffq5_openshift-marketplace(526e97b9-958b-4fc1-859b-5b0c10d093c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.179026 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4ffq5" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" Dec 02 10:10:56 crc kubenswrapper[4813]: I1202 10:10:56.353611 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-62bfc"] Dec 02 10:10:56 crc kubenswrapper[4813]: I1202 10:10:56.878763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-62bfc" event={"ID":"05bb9583-6b23-4207-b709-89dfe49fad73","Type":"ContainerStarted","Data":"89231b108fe05ad9aca03c3037df8554f31eeb098196f1b7e35b3abdf64881ed"} Dec 02 10:10:56 crc kubenswrapper[4813]: I1202 10:10:56.879805 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-62bfc" event={"ID":"05bb9583-6b23-4207-b709-89dfe49fad73","Type":"ContainerStarted","Data":"70468f64bd1d6c2e96825804aa09cb86df9ee9fdb15a21284c524c0380d09630"} Dec 02 10:10:56 crc kubenswrapper[4813]: I1202 10:10:56.879835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-62bfc" event={"ID":"05bb9583-6b23-4207-b709-89dfe49fad73","Type":"ContainerStarted","Data":"1af3763978b7c89c52e2b95eafd99494daf3d0ea63653bfdb55a945ec6c46ddd"} Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.881592 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-g2t6q" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" Dec 02 10:10:56 crc kubenswrapper[4813]: E1202 10:10:56.881594 
4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4ffq5" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" Dec 02 10:10:56 crc kubenswrapper[4813]: I1202 10:10:56.919568 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-62bfc" podStartSLOduration=162.919532404 podStartE2EDuration="2m42.919532404s" podCreationTimestamp="2025-12-02 10:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:56.916631475 +0000 UTC m=+181.111805777" watchObservedRunningTime="2025-12-02 10:10:56.919532404 +0000 UTC m=+181.114706696" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.522166 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 10:10:58 crc kubenswrapper[4813]: E1202 10:10:58.522738 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c513235d-3d00-4ce4-a879-ed7f3b32e81e" containerName="pruner" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.522768 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c513235d-3d00-4ce4-a879-ed7f3b32e81e" containerName="pruner" Dec 02 10:10:58 crc kubenswrapper[4813]: E1202 10:10:58.522786 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ac7347-94ab-4dea-a494-975ece27927c" containerName="pruner" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.522794 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ac7347-94ab-4dea-a494-975ece27927c" containerName="pruner" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.522931 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ac7347-94ab-4dea-a494-975ece27927c" containerName="pruner" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.522958 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c513235d-3d00-4ce4-a879-ed7f3b32e81e" containerName="pruner" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.523450 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.525622 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.525911 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.528871 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.623828 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.623952 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.724712 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.724797 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.724858 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.746222 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:10:58 crc kubenswrapper[4813]: I1202 10:10:58.841863 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:10:59 crc kubenswrapper[4813]: I1202 10:10:59.028244 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 10:10:59 crc kubenswrapper[4813]: I1202 10:10:59.898131 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1a0057b0-0b3e-4a4b-86a5-611b64986c0f","Type":"ContainerStarted","Data":"cfe1de5f11db0e09e01d384f89d7ddee8790e26c0c8c44352aea1e742b40429e"} Dec 02 10:10:59 crc kubenswrapper[4813]: I1202 10:10:59.898479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1a0057b0-0b3e-4a4b-86a5-611b64986c0f","Type":"ContainerStarted","Data":"8e749dc68322027386632910280c3bffa5e46475af67ae745009e72f1ffe13d5"} Dec 02 10:10:59 crc kubenswrapper[4813]: I1202 10:10:59.915208 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.915187124 podStartE2EDuration="1.915187124s" podCreationTimestamp="2025-12-02 10:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:10:59.914908945 +0000 UTC m=+184.110083257" watchObservedRunningTime="2025-12-02 10:10:59.915187124 +0000 UTC m=+184.110361426" Dec 02 10:11:00 crc kubenswrapper[4813]: I1202 10:11:00.904603 4813 generic.go:334] "Generic (PLEG): container finished" podID="1a0057b0-0b3e-4a4b-86a5-611b64986c0f" containerID="cfe1de5f11db0e09e01d384f89d7ddee8790e26c0c8c44352aea1e742b40429e" exitCode=0 Dec 02 10:11:00 crc kubenswrapper[4813]: I1202 10:11:00.904678 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1a0057b0-0b3e-4a4b-86a5-611b64986c0f","Type":"ContainerDied","Data":"cfe1de5f11db0e09e01d384f89d7ddee8790e26c0c8c44352aea1e742b40429e"} Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.027675 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.121194 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.267955 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kube-api-access\") pod \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\" (UID: \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\") " Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.269038 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kubelet-dir\") pod \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\" (UID: \"1a0057b0-0b3e-4a4b-86a5-611b64986c0f\") " Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.269098 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1a0057b0-0b3e-4a4b-86a5-611b64986c0f" (UID: "1a0057b0-0b3e-4a4b-86a5-611b64986c0f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.269214 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.275145 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1a0057b0-0b3e-4a4b-86a5-611b64986c0f" (UID: "1a0057b0-0b3e-4a4b-86a5-611b64986c0f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.370563 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a0057b0-0b3e-4a4b-86a5-611b64986c0f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.916608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1a0057b0-0b3e-4a4b-86a5-611b64986c0f","Type":"ContainerDied","Data":"8e749dc68322027386632910280c3bffa5e46475af67ae745009e72f1ffe13d5"} Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.916656 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e749dc68322027386632910280c3bffa5e46475af67ae745009e72f1ffe13d5" Dec 02 10:11:02 crc kubenswrapper[4813]: I1202 10:11:02.916731 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.273805 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.273871 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.330567 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 10:11:04 crc kubenswrapper[4813]: E1202 10:11:04.330908 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0057b0-0b3e-4a4b-86a5-611b64986c0f" containerName="pruner" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.330953 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0057b0-0b3e-4a4b-86a5-611b64986c0f" containerName="pruner" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.331090 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0057b0-0b3e-4a4b-86a5-611b64986c0f" containerName="pruner" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.331581 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.333882 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.334059 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.337661 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.393491 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.393555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-var-lock\") pod \"installer-9-crc\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.393777 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kube-api-access\") pod \"installer-9-crc\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.494845 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kube-api-access\") pod \"installer-9-crc\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.494920 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.494949 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-var-lock\") pod \"installer-9-crc\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.495021 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.495061 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-var-lock\") pod \"installer-9-crc\" (UID: 
\"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.512803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kube-api-access\") pod \"installer-9-crc\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.648786 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.852296 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 10:11:04 crc kubenswrapper[4813]: I1202 10:11:04.927521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"504f99d9-c9d5-4aa5-a816-d8b54033d4eb","Type":"ContainerStarted","Data":"1939019bd93ade5c0cc4d17b18871440cd0abb41296ba84a31d08a9d13d0d70a"} Dec 02 10:11:05 crc kubenswrapper[4813]: I1202 10:11:05.938638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"504f99d9-c9d5-4aa5-a816-d8b54033d4eb","Type":"ContainerStarted","Data":"d223c403bd82b89547109f95ff4106817dbb50b555691060f23cffce139b3006"} Dec 02 10:11:05 crc kubenswrapper[4813]: I1202 10:11:05.955007 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.954987231 podStartE2EDuration="1.954987231s" podCreationTimestamp="2025-12-02 10:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:11:05.950825883 +0000 UTC m=+190.146000195" watchObservedRunningTime="2025-12-02 10:11:05.954987231 +0000 UTC m=+190.150161543" Dec 02 10:11:06 crc kubenswrapper[4813]: I1202 10:11:06.946534 4813 generic.go:334] "Generic (PLEG): container finished" podID="8988c99f-2949-423b-9fc7-406be45a14ff" containerID="d2afd61a4df6b9d79d8c8066749becc259248f0fb8c72beedf299c79dbe22e5f" exitCode=0 Dec 02 10:11:06 crc kubenswrapper[4813]: I1202 10:11:06.946866 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2rvt" event={"ID":"8988c99f-2949-423b-9fc7-406be45a14ff","Type":"ContainerDied","Data":"d2afd61a4df6b9d79d8c8066749becc259248f0fb8c72beedf299c79dbe22e5f"} Dec 02 10:11:08 crc kubenswrapper[4813]: I1202 10:11:08.962523 4813 generic.go:334] "Generic (PLEG): container finished" podID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerID="552a666bfe13888bae6fea22dfd97abe2d32c523fa9c62a752d59b016874b862" exitCode=0 Dec 02 10:11:08 crc kubenswrapper[4813]: I1202 10:11:08.963062 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljxzg" event={"ID":"72dfffc2-b16b-47e4-9e6c-1c5562e48db0","Type":"ContainerDied","Data":"552a666bfe13888bae6fea22dfd97abe2d32c523fa9c62a752d59b016874b862"} Dec 02 10:11:09 crc kubenswrapper[4813]: I1202 10:11:09.971192 4813 generic.go:334] "Generic (PLEG): container finished" podID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerID="1ed67d33d13bfe3d383e013e36fb2d59fe7ae06d7a95573848c80adc55cc9bef" exitCode=0 Dec 02 10:11:09 crc kubenswrapper[4813]: I1202 10:11:09.971556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4c78n" event={"ID":"dc26ee61-a67c-4200-8cd7-4ca46e748fea","Type":"ContainerDied","Data":"1ed67d33d13bfe3d383e013e36fb2d59fe7ae06d7a95573848c80adc55cc9bef"} Dec 02 10:11:09 crc kubenswrapper[4813]: I1202 10:11:09.978676 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljxzg" event={"ID":"72dfffc2-b16b-47e4-9e6c-1c5562e48db0","Type":"ContainerStarted","Data":"99f3a6f4ceb1c3ff32289089741fae6c4572240ba7234aee3c2bccf19af863cf"} Dec 02 10:11:09 crc kubenswrapper[4813]: I1202 10:11:09.981416 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2rvt" event={"ID":"8988c99f-2949-423b-9fc7-406be45a14ff","Type":"ContainerStarted","Data":"f67ecf97bcc377098c69ba6aa68a4cb3b6bfce9c170360afcc483fd4fa759be5"} Dec 02 10:11:09 crc kubenswrapper[4813]: I1202 10:11:09.984246 4813 generic.go:334] "Generic (PLEG): container finished" podID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerID="a7f554db3a098e59a88902168f915f95fce2804ea4259f66dff8c2623e5074d6" exitCode=0 Dec 02 10:11:09 crc kubenswrapper[4813]: I1202 10:11:09.984292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkq9l" event={"ID":"fbd02c03-797f-44fd-87f3-465e9198c4e8","Type":"ContainerDied","Data":"a7f554db3a098e59a88902168f915f95fce2804ea4259f66dff8c2623e5074d6"} Dec 02 10:11:10 crc kubenswrapper[4813]: I1202 10:11:10.040087 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ljxzg" podStartSLOduration=2.733668219 podStartE2EDuration="53.040051717s" podCreationTimestamp="2025-12-02 10:10:17 +0000 UTC" firstStartedPulling="2025-12-02 10:10:19.341643438 +0000 UTC m=+143.536817780" lastFinishedPulling="2025-12-02 10:11:09.648026976 +0000 UTC m=+193.843201278" observedRunningTime="2025-12-02 10:11:10.018582059 +0000 UTC m=+194.213756371" watchObservedRunningTime="2025-12-02 10:11:10.040051717 +0000 UTC m=+194.235226019" Dec 02 10:11:10 crc kubenswrapper[4813]: I1202 10:11:10.056623 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r2rvt" podStartSLOduration=4.713225915 podStartE2EDuration="50.056599155s" podCreationTimestamp="2025-12-02 10:10:20 +0000 UTC" firstStartedPulling="2025-12-02 10:10:23.593683014 +0000 UTC m=+147.788857316" lastFinishedPulling="2025-12-02 10:11:08.937056254 +0000 UTC m=+193.132230556" observedRunningTime="2025-12-02 10:11:10.038463748 +0000 UTC m=+194.233638070" watchObservedRunningTime="2025-12-02 10:11:10.056599155 +0000 UTC m=+194.251773457" Dec 02 10:11:10 crc kubenswrapper[4813]: I1202 10:11:10.388667 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r2rvt" Dec 02 10:11:10 crc kubenswrapper[4813]: I1202 10:11:10.388728 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r2rvt" Dec 02 10:11:10 crc kubenswrapper[4813]: I1202 10:11:10.447794 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r2rvt" Dec 02 10:11:10 crc kubenswrapper[4813]: I1202 10:11:10.992304 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c78n" 
event={"ID":"dc26ee61-a67c-4200-8cd7-4ca46e748fea","Type":"ContainerStarted","Data":"2ad929ff6e0a868f83c23feb4519e730dc3d80479bcd9a58e617e930b04ebcba"} Dec 02 10:11:10 crc kubenswrapper[4813]: I1202 10:11:10.994150 4813 generic.go:334] "Generic (PLEG): container finished" podID="40e81749-c104-4999-9e79-eea19913cbc2" containerID="2292bb00d009e2192d18b9dc44cbf3e57fab273ddb7422edcc56e63bf94f4532" exitCode=0 Dec 02 10:11:10 crc kubenswrapper[4813]: I1202 10:11:10.994279 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tbf8" event={"ID":"40e81749-c104-4999-9e79-eea19913cbc2","Type":"ContainerDied","Data":"2292bb00d009e2192d18b9dc44cbf3e57fab273ddb7422edcc56e63bf94f4532"} Dec 02 10:11:11 crc kubenswrapper[4813]: I1202 10:11:11.000408 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkq9l" event={"ID":"fbd02c03-797f-44fd-87f3-465e9198c4e8","Type":"ContainerStarted","Data":"0142a3dffac089aad9a5eda6bc4c497665ae8b80714feb15c232ed2dcf8e5df4"} Dec 02 10:11:11 crc kubenswrapper[4813]: I1202 10:11:11.017975 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4c78n" podStartSLOduration=2.586410303 podStartE2EDuration="54.017948953s" podCreationTimestamp="2025-12-02 10:10:17 +0000 UTC" firstStartedPulling="2025-12-02 10:10:19.330273679 +0000 UTC m=+143.525447981" lastFinishedPulling="2025-12-02 10:11:10.761812329 +0000 UTC m=+194.956986631" observedRunningTime="2025-12-02 10:11:11.014207938 +0000 UTC m=+195.209382240" watchObservedRunningTime="2025-12-02 10:11:11.017948953 +0000 UTC m=+195.213123265" Dec 02 10:11:11 crc kubenswrapper[4813]: I1202 10:11:11.050341 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mkq9l" podStartSLOduration=1.6553746280000001 podStartE2EDuration="53.050298605s" podCreationTimestamp="2025-12-02 10:10:18 +0000 UTC" firstStartedPulling="2025-12-02 10:10:19.324633886 +0000 UTC m=+143.519808218" lastFinishedPulling="2025-12-02 10:11:10.719557883 +0000 UTC m=+194.914732195" observedRunningTime="2025-12-02 10:11:11.049379567 +0000 UTC m=+195.244553889" watchObservedRunningTime="2025-12-02 10:11:11.050298605 +0000 UTC m=+195.245472907" Dec 02 10:11:13 crc kubenswrapper[4813]: I1202 10:11:13.012371 4813 generic.go:334] "Generic (PLEG): container finished" podID="44363502-e734-4d2e-8f4b-eec2442afe63" containerID="b42530056a58f07329eb640cdf41407605c1d1df0eb82ae8354ff3b726f37954" exitCode=0 Dec 02 10:11:13 crc kubenswrapper[4813]: I1202 10:11:13.012492 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2t6q" event={"ID":"44363502-e734-4d2e-8f4b-eec2442afe63","Type":"ContainerDied","Data":"b42530056a58f07329eb640cdf41407605c1d1df0eb82ae8354ff3b726f37954"} Dec 02 10:11:13 crc kubenswrapper[4813]: I1202 10:11:13.014456 4813 generic.go:334] "Generic (PLEG): container finished" podID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerID="9363524a92cace456a56826e194b771bba60f7d94c392e235b305f28bf1eda42" exitCode=0 Dec 02 10:11:13 crc kubenswrapper[4813]: I1202 10:11:13.014495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csjp9" event={"ID":"64b4cb5e-0d3b-437c-8287-599558fd972b","Type":"ContainerDied","Data":"9363524a92cace456a56826e194b771bba60f7d94c392e235b305f28bf1eda42"} Dec 02 10:11:13 crc kubenswrapper[4813]: I1202 10:11:13.017208 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ffq5" event={"ID":"526e97b9-958b-4fc1-859b-5b0c10d093c5","Type":"ContainerStarted","Data":"2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b"} Dec 02 10:11:13 crc kubenswrapper[4813]: I1202 10:11:13.020008 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tbf8" event={"ID":"40e81749-c104-4999-9e79-eea19913cbc2","Type":"ContainerStarted","Data":"a890e5fb6b272035351e2a062e3211761f145a069ced2e5ee8cdcac0ae016808"} Dec 02 10:11:13 crc kubenswrapper[4813]: I1202 10:11:13.056481 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7tbf8" podStartSLOduration=2.245119363 podStartE2EDuration="55.056460683s" podCreationTimestamp="2025-12-02 10:10:18 +0000 UTC" firstStartedPulling="2025-12-02 10:10:19.338347277 +0000 UTC m=+143.533521579" lastFinishedPulling="2025-12-02 10:11:12.149688597 +0000 UTC m=+196.344862899" observedRunningTime="2025-12-02 10:11:13.055960828 +0000 UTC m=+197.251135140" watchObservedRunningTime="2025-12-02 10:11:13.056460683 +0000 UTC m=+197.251634985" Dec 02 10:11:14 crc kubenswrapper[4813]: I1202 10:11:14.027274 4813 generic.go:334] "Generic (PLEG): container finished" podID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerID="2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b" exitCode=0 Dec 02 10:11:14 crc kubenswrapper[4813]: I1202 10:11:14.027330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ffq5" event={"ID":"526e97b9-958b-4fc1-859b-5b0c10d093c5","Type":"ContainerDied","Data":"2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b"} Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.023023 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ljxzg" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.023507 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ljxzg" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.079531 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ljxzg" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.118684 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ljxzg" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.192034 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4c78n" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.192836 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4c78n" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.230615 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4c78n" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.396524 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7tbf8" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.396587 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7tbf8" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 
10:11:18.432957 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7tbf8" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.602845 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mkq9l" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.602916 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mkq9l" Dec 02 10:11:18 crc kubenswrapper[4813]: I1202 10:11:18.643342 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mkq9l" Dec 02 10:11:19 crc kubenswrapper[4813]: I1202 10:11:19.057130 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csjp9" event={"ID":"64b4cb5e-0d3b-437c-8287-599558fd972b","Type":"ContainerStarted","Data":"e6a612d4a513f010b2815621410c3da88e479593fd71d49b132a9cc17ea32111"} Dec 02 10:11:19 crc kubenswrapper[4813]: I1202 10:11:19.102785 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4c78n" Dec 02 10:11:19 crc kubenswrapper[4813]: I1202 10:11:19.103176 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7tbf8" Dec 02 10:11:19 crc kubenswrapper[4813]: I1202 10:11:19.103235 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mkq9l" Dec 02 10:11:20 crc kubenswrapper[4813]: I1202 10:11:20.082178 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-csjp9" podStartSLOduration=6.193286088 podStartE2EDuration="1m1.082157073s" podCreationTimestamp="2025-12-02 10:10:19 +0000 UTC" firstStartedPulling="2025-12-02 10:10:21.458238641 +0000 UTC m=+145.653412943" lastFinishedPulling="2025-12-02 10:11:16.347109626 +0000 UTC m=+200.542283928" observedRunningTime="2025-12-02 10:11:20.080680167 +0000 UTC m=+204.275854479" watchObservedRunningTime="2025-12-02 10:11:20.082157073 +0000 UTC m=+204.277331375" Dec 02 10:11:20 crc kubenswrapper[4813]: I1202 10:11:20.298271 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7tbf8"] Dec 02 10:11:20 crc kubenswrapper[4813]: I1202 10:11:20.432007 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r2rvt" Dec 02 10:11:20 crc kubenswrapper[4813]: I1202 10:11:20.498424 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkq9l"] Dec 02 10:11:21 crc kubenswrapper[4813]: I1202 10:11:21.075949 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7tbf8" podUID="40e81749-c104-4999-9e79-eea19913cbc2" containerName="registry-server" containerID="cri-o://a890e5fb6b272035351e2a062e3211761f145a069ced2e5ee8cdcac0ae016808" gracePeriod=2 Dec 02 10:11:21 crc kubenswrapper[4813]: I1202 10:11:21.076092 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mkq9l" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerName="registry-server" containerID="cri-o://0142a3dffac089aad9a5eda6bc4c497665ae8b80714feb15c232ed2dcf8e5df4" gracePeriod=2 Dec 02 10:11:22 crc 
kubenswrapper[4813]: I1202 10:11:22.699314 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2rvt"] Dec 02 10:11:22 crc kubenswrapper[4813]: I1202 10:11:22.700201 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r2rvt" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" containerName="registry-server" containerID="cri-o://f67ecf97bcc377098c69ba6aa68a4cb3b6bfce9c170360afcc483fd4fa759be5" gracePeriod=2 Dec 02 10:11:23 crc kubenswrapper[4813]: I1202 10:11:23.092653 4813 generic.go:334] "Generic (PLEG): container finished" podID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerID="0142a3dffac089aad9a5eda6bc4c497665ae8b80714feb15c232ed2dcf8e5df4" exitCode=0 Dec 02 10:11:23 crc kubenswrapper[4813]: I1202 10:11:23.092707 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkq9l" event={"ID":"fbd02c03-797f-44fd-87f3-465e9198c4e8","Type":"ContainerDied","Data":"0142a3dffac089aad9a5eda6bc4c497665ae8b80714feb15c232ed2dcf8e5df4"} Dec 02 10:11:24 crc kubenswrapper[4813]: I1202 10:11:24.102117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tbf8" event={"ID":"40e81749-c104-4999-9e79-eea19913cbc2","Type":"ContainerDied","Data":"a890e5fb6b272035351e2a062e3211761f145a069ced2e5ee8cdcac0ae016808"} Dec 02 10:11:24 crc kubenswrapper[4813]: I1202 10:11:24.102116 4813 generic.go:334] "Generic (PLEG): container finished" podID="40e81749-c104-4999-9e79-eea19913cbc2" containerID="a890e5fb6b272035351e2a062e3211761f145a069ced2e5ee8cdcac0ae016808" exitCode=0 Dec 02 10:11:24 crc kubenswrapper[4813]: I1202 10:11:24.925661 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkq9l" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.000437 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7tbf8" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.065010 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-catalog-content\") pod \"fbd02c03-797f-44fd-87f3-465e9198c4e8\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.065113 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-utilities\") pod \"fbd02c03-797f-44fd-87f3-465e9198c4e8\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.065193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5khnz\" (UniqueName: \"kubernetes.io/projected/fbd02c03-797f-44fd-87f3-465e9198c4e8-kube-api-access-5khnz\") pod \"fbd02c03-797f-44fd-87f3-465e9198c4e8\" (UID: \"fbd02c03-797f-44fd-87f3-465e9198c4e8\") " Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.066482 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-utilities" (OuterVolumeSpecName: "utilities") pod "fbd02c03-797f-44fd-87f3-465e9198c4e8" (UID: "fbd02c03-797f-44fd-87f3-465e9198c4e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.071633 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd02c03-797f-44fd-87f3-465e9198c4e8-kube-api-access-5khnz" (OuterVolumeSpecName: "kube-api-access-5khnz") pod "fbd02c03-797f-44fd-87f3-465e9198c4e8" (UID: "fbd02c03-797f-44fd-87f3-465e9198c4e8"). InnerVolumeSpecName "kube-api-access-5khnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.110266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkq9l" event={"ID":"fbd02c03-797f-44fd-87f3-465e9198c4e8","Type":"ContainerDied","Data":"78a1afb4b6d1792d1a2d803abcf670e867cac11097a1085b4de6e636a4e18c5f"} Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.111121 4813 scope.go:117] "RemoveContainer" containerID="0142a3dffac089aad9a5eda6bc4c497665ae8b80714feb15c232ed2dcf8e5df4" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.110311 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkq9l" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.117726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tbf8" event={"ID":"40e81749-c104-4999-9e79-eea19913cbc2","Type":"ContainerDied","Data":"14f5862bbab5c9b1d8936dbcf414c637811879c213691d0789e7bd45f0210bc0"} Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.117801 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7tbf8" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.120238 4813 generic.go:334] "Generic (PLEG): container finished" podID="8988c99f-2949-423b-9fc7-406be45a14ff" containerID="f67ecf97bcc377098c69ba6aa68a4cb3b6bfce9c170360afcc483fd4fa759be5" exitCode=0 Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.120286 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2rvt" event={"ID":"8988c99f-2949-423b-9fc7-406be45a14ff","Type":"ContainerDied","Data":"f67ecf97bcc377098c69ba6aa68a4cb3b6bfce9c170360afcc483fd4fa759be5"} Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.130695 4813 scope.go:117] "RemoveContainer" containerID="a7f554db3a098e59a88902168f915f95fce2804ea4259f66dff8c2623e5074d6" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.146202 4813 scope.go:117] "RemoveContainer" containerID="f0e4293d88063cc21d1944d400c18a60bf1626b658ef37498eee868b9b893555" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.166611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-catalog-content\") pod \"40e81749-c104-4999-9e79-eea19913cbc2\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.166682 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f8f9\" (UniqueName: \"kubernetes.io/projected/40e81749-c104-4999-9e79-eea19913cbc2-kube-api-access-9f8f9\") pod \"40e81749-c104-4999-9e79-eea19913cbc2\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.166797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-utilities\") pod \"40e81749-c104-4999-9e79-eea19913cbc2\" (UID: \"40e81749-c104-4999-9e79-eea19913cbc2\") " Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.167193 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5khnz\" (UniqueName: \"kubernetes.io/projected/fbd02c03-797f-44fd-87f3-465e9198c4e8-kube-api-access-5khnz\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.167210 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.168159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-utilities" (OuterVolumeSpecName: "utilities") pod "40e81749-c104-4999-9e79-eea19913cbc2" (UID: "40e81749-c104-4999-9e79-eea19913cbc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.173379 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e81749-c104-4999-9e79-eea19913cbc2-kube-api-access-9f8f9" (OuterVolumeSpecName: "kube-api-access-9f8f9") pod "40e81749-c104-4999-9e79-eea19913cbc2" (UID: "40e81749-c104-4999-9e79-eea19913cbc2"). InnerVolumeSpecName "kube-api-access-9f8f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.214428 4813 scope.go:117] "RemoveContainer" containerID="a890e5fb6b272035351e2a062e3211761f145a069ced2e5ee8cdcac0ae016808" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.237131 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbd02c03-797f-44fd-87f3-465e9198c4e8" (UID: "fbd02c03-797f-44fd-87f3-465e9198c4e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.238169 4813 scope.go:117] "RemoveContainer" containerID="2292bb00d009e2192d18b9dc44cbf3e57fab273ddb7422edcc56e63bf94f4532" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.262599 4813 scope.go:117] "RemoveContainer" containerID="83fe8cb8e58860cde3d8cb851913f25afea91e0ecf8be5ab963f55ff3ebba60d" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.268195 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.268231 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbd02c03-797f-44fd-87f3-465e9198c4e8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.268240 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f8f9\" (UniqueName: \"kubernetes.io/projected/40e81749-c104-4999-9e79-eea19913cbc2-kube-api-access-9f8f9\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.274360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40e81749-c104-4999-9e79-eea19913cbc2" (UID: "40e81749-c104-4999-9e79-eea19913cbc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.369791 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40e81749-c104-4999-9e79-eea19913cbc2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.447603 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7tbf8"] Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.450293 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7tbf8"] Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.521639 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkq9l"] Dec 02 10:11:25 crc kubenswrapper[4813]: I1202 10:11:25.525695 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mkq9l"] Dec 02 10:11:26 crc kubenswrapper[4813]: I1202 10:11:26.075344 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e81749-c104-4999-9e79-eea19913cbc2" path="/var/lib/kubelet/pods/40e81749-c104-4999-9e79-eea19913cbc2/volumes" Dec 02 10:11:26 crc kubenswrapper[4813]: I1202 10:11:26.075949 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" path="/var/lib/kubelet/pods/fbd02c03-797f-44fd-87f3-465e9198c4e8/volumes" Dec 02 10:11:26 crc kubenswrapper[4813]: I1202 10:11:26.857119 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2rvt" Dec 02 10:11:26 crc kubenswrapper[4813]: I1202 10:11:26.988536 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-utilities\") pod \"8988c99f-2949-423b-9fc7-406be45a14ff\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " Dec 02 10:11:26 crc kubenswrapper[4813]: I1202 10:11:26.988633 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkwwd\" (UniqueName: \"kubernetes.io/projected/8988c99f-2949-423b-9fc7-406be45a14ff-kube-api-access-hkwwd\") pod \"8988c99f-2949-423b-9fc7-406be45a14ff\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " Dec 02 10:11:26 crc kubenswrapper[4813]: I1202 10:11:26.988692 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-catalog-content\") pod \"8988c99f-2949-423b-9fc7-406be45a14ff\" (UID: \"8988c99f-2949-423b-9fc7-406be45a14ff\") " Dec 02 10:11:26 crc kubenswrapper[4813]: I1202 10:11:26.989557 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-utilities" (OuterVolumeSpecName: "utilities") pod "8988c99f-2949-423b-9fc7-406be45a14ff" (UID: "8988c99f-2949-423b-9fc7-406be45a14ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:11:26 crc kubenswrapper[4813]: I1202 10:11:26.993924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8988c99f-2949-423b-9fc7-406be45a14ff-kube-api-access-hkwwd" (OuterVolumeSpecName: "kube-api-access-hkwwd") pod "8988c99f-2949-423b-9fc7-406be45a14ff" (UID: "8988c99f-2949-423b-9fc7-406be45a14ff"). InnerVolumeSpecName "kube-api-access-hkwwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.010113 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8988c99f-2949-423b-9fc7-406be45a14ff" (UID: "8988c99f-2949-423b-9fc7-406be45a14ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.090013 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.090064 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkwwd\" (UniqueName: \"kubernetes.io/projected/8988c99f-2949-423b-9fc7-406be45a14ff-kube-api-access-hkwwd\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.090100 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8988c99f-2949-423b-9fc7-406be45a14ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.136246 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2t6q" event={"ID":"44363502-e734-4d2e-8f4b-eec2442afe63","Type":"ContainerStarted","Data":"9fe92abf84d632f2c9974f34c46d5f0d0096577a8ac82ce0e60af71d60c856fc"} Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.139445 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2rvt" event={"ID":"8988c99f-2949-423b-9fc7-406be45a14ff","Type":"ContainerDied","Data":"f5392d777fe4c37d50fe4d0a9ed2a2783e463cdbb9f313c1a772165bfb846959"} Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.139508 4813 scope.go:117] "RemoveContainer" containerID="f67ecf97bcc377098c69ba6aa68a4cb3b6bfce9c170360afcc483fd4fa759be5" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.139513 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2rvt" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.153470 4813 scope.go:117] "RemoveContainer" containerID="d2afd61a4df6b9d79d8c8066749becc259248f0fb8c72beedf299c79dbe22e5f" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.161386 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g2t6q" podStartSLOduration=5.910027665 podStartE2EDuration="1m7.161363546s" podCreationTimestamp="2025-12-02 10:10:20 +0000 UTC" firstStartedPulling="2025-12-02 10:10:23.565952934 +0000 UTC m=+147.761127236" lastFinishedPulling="2025-12-02 10:11:24.817288815 +0000 UTC m=+209.012463117" observedRunningTime="2025-12-02 10:11:27.159435036 +0000 UTC m=+211.354609338" watchObservedRunningTime="2025-12-02 10:11:27.161363546 +0000 UTC m=+211.356537848" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.178412 4813 scope.go:117] "RemoveContainer" containerID="f08c4aab45ab02b93d94ddd80f373560214a9e154be7990f8252f3d1167398a8" Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.181556 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2rvt"] Dec 02 10:11:27 crc kubenswrapper[4813]: I1202 10:11:27.184711 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2rvt"] Dec 02 10:11:28 crc kubenswrapper[4813]: I1202 10:11:28.074867 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" path="/var/lib/kubelet/pods/8988c99f-2949-423b-9fc7-406be45a14ff/volumes" Dec 02 10:11:28 crc kubenswrapper[4813]: I1202 10:11:28.148764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ffq5" event={"ID":"526e97b9-958b-4fc1-859b-5b0c10d093c5","Type":"ContainerStarted","Data":"8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076"} Dec 02 10:11:29 crc kubenswrapper[4813]: I1202 10:11:29.176239 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ffq5" podStartSLOduration=4.786545032 podStartE2EDuration="1m8.176215486s" podCreationTimestamp="2025-12-02 10:10:21 +0000 UTC" firstStartedPulling="2025-12-02 10:10:23.580374656 +0000 UTC m=+147.775548958" lastFinishedPulling="2025-12-02 10:11:26.9700451 +0000 UTC m=+211.165219412" observedRunningTime="2025-12-02 10:11:29.172891332 +0000 UTC m=+213.368065634" watchObservedRunningTime="2025-12-02 10:11:29.176215486 +0000 UTC m=+213.371389788" Dec 02 10:11:29 crc kubenswrapper[4813]: I1202 10:11:29.182263 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g8r9r"] Dec 02 10:11:29 crc kubenswrapper[4813]: I1202 10:11:29.974588 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-csjp9" Dec 02 10:11:29 crc kubenswrapper[4813]: I1202 10:11:29.974932 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-csjp9" Dec 02 10:11:30 crc kubenswrapper[4813]: I1202 10:11:30.015111 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-csjp9" Dec 02 10:11:30 crc kubenswrapper[4813]: I1202 10:11:30.199564 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-csjp9" Dec 02 
10:11:31 crc kubenswrapper[4813]: I1202 10:11:31.200770 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g2t6q" Dec 02 10:11:31 crc kubenswrapper[4813]: I1202 10:11:31.201132 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g2t6q" Dec 02 10:11:31 crc kubenswrapper[4813]: I1202 10:11:31.604455 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ffq5" Dec 02 10:11:31 crc kubenswrapper[4813]: I1202 10:11:31.604521 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ffq5" Dec 02 10:11:32 crc kubenswrapper[4813]: I1202 10:11:32.239872 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g2t6q" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="registry-server" probeResult="failure" output=< Dec 02 10:11:32 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 10:11:32 crc kubenswrapper[4813]: > Dec 02 10:11:32 crc kubenswrapper[4813]: I1202 10:11:32.639528 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4ffq5" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="registry-server" probeResult="failure" output=< Dec 02 10:11:32 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 10:11:32 crc kubenswrapper[4813]: > Dec 02 10:11:34 crc kubenswrapper[4813]: I1202 10:11:34.273725 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:11:34 crc kubenswrapper[4813]: I1202 10:11:34.273803 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:11:34 crc kubenswrapper[4813]: I1202 10:11:34.273855 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:11:34 crc kubenswrapper[4813]: I1202 10:11:34.274400 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:11:34 crc kubenswrapper[4813]: I1202 10:11:34.274513 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a" gracePeriod=600 Dec 02 10:11:36 crc kubenswrapper[4813]: I1202 10:11:36.193343 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" 
containerID="c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a" exitCode=0 Dec 02 10:11:36 crc kubenswrapper[4813]: I1202 10:11:36.193486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a"} Dec 02 10:11:37 crc kubenswrapper[4813]: I1202 10:11:37.201605 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"a250494ac86f85dad09fbc8d2f9fa5868e3037f669dedba3f0be3dccef6a657e"} Dec 02 10:11:41 crc kubenswrapper[4813]: I1202 10:11:41.285956 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g2t6q" Dec 02 10:11:41 crc kubenswrapper[4813]: I1202 10:11:41.338271 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g2t6q" Dec 02 10:11:41 crc kubenswrapper[4813]: I1202 10:11:41.655016 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ffq5" Dec 02 10:11:41 crc kubenswrapper[4813]: I1202 10:11:41.712767 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ffq5" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.988967 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989396 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e81749-c104-4999-9e79-eea19913cbc2" containerName="extract-content" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989419 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e81749-c104-4999-9e79-eea19913cbc2" containerName="extract-content" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989433 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" containerName="extract-content" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989442 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" containerName="extract-content" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989455 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" containerName="extract-utilities" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989466 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" containerName="extract-utilities" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989477 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989485 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989502 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerName="extract-utilities" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 
10:11:42.989510 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerName="extract-utilities" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989521 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e81749-c104-4999-9e79-eea19913cbc2" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989529 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e81749-c104-4999-9e79-eea19913cbc2" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989541 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989548 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989558 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerName="extract-content" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989565 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerName="extract-content" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.989577 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e81749-c104-4999-9e79-eea19913cbc2" containerName="extract-utilities" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989584 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e81749-c104-4999-9e79-eea19913cbc2" containerName="extract-utilities" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989712 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8988c99f-2949-423b-9fc7-406be45a14ff" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989726 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd02c03-797f-44fd-87f3-465e9198c4e8" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.989733 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e81749-c104-4999-9e79-eea19913cbc2" containerName="registry-server" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.990260 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.990489 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.990675 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb" gracePeriod=15 Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.990703 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff" gracePeriod=15 Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.990836 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856" gracePeriod=15 Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.990866 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa" gracePeriod=15 Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.990829 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425" gracePeriod=15 Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.991423 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.991695 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.991725 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.991737 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.991747 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.991761 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.991770 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.991782 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.991790 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.991809 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.991910 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.991921 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.991926 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.992101 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.992122 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.992133 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.992142 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.992154 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.992167 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 10:11:42 crc kubenswrapper[4813]: E1202 10:11:42.992335 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:11:42 crc kubenswrapper[4813]: I1202 10:11:42.992347 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.028762 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.076602 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.076676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.076722 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.076799 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.076856 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.076897 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.076921 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.076937 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.177698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.177762 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.177796 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.177811 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.177870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.177916 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.177826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.177993 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.178167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.178190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.178248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.178204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.178321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.178362 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.178373 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.178451 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.233555 4813 generic.go:334] "Generic (PLEG): container finished" podID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" containerID="d223c403bd82b89547109f95ff4106817dbb50b555691060f23cffce139b3006" exitCode=0 Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.233617 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"504f99d9-c9d5-4aa5-a816-d8b54033d4eb","Type":"ContainerDied","Data":"d223c403bd82b89547109f95ff4106817dbb50b555691060f23cffce139b3006"} Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.234450 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.234680 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.234855 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:43 crc kubenswrapper[4813]: 
I1202 10:11:43.237245 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.239480 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.240286 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425" exitCode=0 Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.240311 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff" exitCode=0 Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.240321 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856" exitCode=0 Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.240330 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa" exitCode=2 Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.240356 4813 scope.go:117] "RemoveContainer" containerID="549be957e02ac8c55a1b0dce6eb135192fd23801c3a567422a6e0f6e757fb1ce" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.324615 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:11:43 crc kubenswrapper[4813]: W1202 10:11:43.343614 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e6ae5513a14d98493f8a2abef091de50c7e4da2eea414698f9980c4a98107228 WatchSource:0}: Error finding container e6ae5513a14d98493f8a2abef091de50c7e4da2eea414698f9980c4a98107228: Status 404 returned error can't find the container with id e6ae5513a14d98493f8a2abef091de50c7e4da2eea414698f9980c4a98107228 Dec 02 10:11:43 crc kubenswrapper[4813]: E1202 10:11:43.346766 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d5e4a909a219d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 10:11:43.345713565 +0000 UTC m=+227.540887867,LastTimestamp:2025-12-02 10:11:43.345713565 +0000 UTC m=+227.540887867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.547821 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 02 10:11:43 crc kubenswrapper[4813]: I1202 10:11:43.547919 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.251388 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.254962 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d"} Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.255029 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e6ae5513a14d98493f8a2abef091de50c7e4da2eea414698f9980c4a98107228"} Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.256726 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.257376 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.488829 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.489533 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.489945 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.494130 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kubelet-dir\") pod \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.494203 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kube-api-access\") pod \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.494251 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-var-lock\") pod \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\" (UID: \"504f99d9-c9d5-4aa5-a816-d8b54033d4eb\") " Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.494280 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "504f99d9-c9d5-4aa5-a816-d8b54033d4eb" (UID: "504f99d9-c9d5-4aa5-a816-d8b54033d4eb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.494379 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-var-lock" (OuterVolumeSpecName: "var-lock") pod "504f99d9-c9d5-4aa5-a816-d8b54033d4eb" (UID: "504f99d9-c9d5-4aa5-a816-d8b54033d4eb"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.494582 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.494601 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.503270 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "504f99d9-c9d5-4aa5-a816-d8b54033d4eb" (UID: "504f99d9-c9d5-4aa5-a816-d8b54033d4eb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:11:44 crc kubenswrapper[4813]: I1202 10:11:44.595545 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504f99d9-c9d5-4aa5-a816-d8b54033d4eb-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.262899 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.264290 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"504f99d9-c9d5-4aa5-a816-d8b54033d4eb","Type":"ContainerDied","Data":"1939019bd93ade5c0cc4d17b18871440cd0abb41296ba84a31d08a9d13d0d70a"} Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.264342 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1939019bd93ade5c0cc4d17b18871440cd0abb41296ba84a31d08a9d13d0d70a" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.284217 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.284877 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:45 crc kubenswrapper[4813]: E1202 10:11:45.370285 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d5e4a909a219d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 10:11:43.345713565 +0000 UTC m=+227.540887867,LastTimestamp:2025-12-02 10:11:43.345713565 +0000 UTC m=+227.540887867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.376883 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.377658 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.378505 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.379047 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.379522 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.406589 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.406809 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.406888 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.406915 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.406984 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.407137 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.407461 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.407493 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:45 crc kubenswrapper[4813]: I1202 10:11:45.407512 4813 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.071550 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.072335 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.072642 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.075058 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.275229 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.276782 4813 scope.go:117] "RemoveContainer" containerID="6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.276791 4813 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.277132 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb" exitCode=0 Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.277429 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.278285 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.278600 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.280366 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.281014 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.281354 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.298627 4813 scope.go:117] "RemoveContainer" containerID="41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.315129 4813 scope.go:117] "RemoveContainer" containerID="f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.329497 4813 scope.go:117] "RemoveContainer" containerID="2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.343645 4813 scope.go:117] "RemoveContainer" containerID="74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.361018 4813 scope.go:117] "RemoveContainer" 
containerID="a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.381489 4813 scope.go:117] "RemoveContainer" containerID="6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425" Dec 02 10:11:46 crc kubenswrapper[4813]: E1202 10:11:46.383284 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\": container with ID starting with 6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425 not found: ID does not exist" containerID="6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.383331 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425"} err="failed to get container status \"6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\": rpc error: code = NotFound desc = could not find container \"6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425\": container with ID starting with 6933d35456f0c937186e02b5539dc99706dc7e5b5c3aa40258d1abbc1309d425 not found: ID does not exist" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.383849 4813 scope.go:117] "RemoveContainer" containerID="41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff" Dec 02 10:11:46 crc kubenswrapper[4813]: E1202 10:11:46.385248 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\": container with ID starting with 41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff not found: ID does not exist" containerID="41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.385290 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff"} err="failed to get container status \"41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\": rpc error: code = NotFound desc = could not find container \"41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff\": container with ID starting with 41ba4b4f98c952a2e9cc9a0faa5fdfd66f909d7705ca20c42086403138c902ff not found: ID does not exist" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.385319 4813 scope.go:117] "RemoveContainer" containerID="f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856" Dec 02 10:11:46 crc kubenswrapper[4813]: E1202 10:11:46.385878 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\": container with ID starting with f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856 not found: ID does not exist" containerID="f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856" Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.385913 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856"} err="failed to get container status \"f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\": rpc error: code = 
NotFound desc = could not find container \"f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856\": container with ID starting with f243cc5f02f16a97ebbaf40c4f98ac2462eb72d609a035a94ff43f7a2e883856 not found: ID does not exist"
Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.385937 4813 scope.go:117] "RemoveContainer" containerID="2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa"
Dec 02 10:11:46 crc kubenswrapper[4813]: E1202 10:11:46.386353 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\": container with ID starting with 2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa not found: ID does not exist" containerID="2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa"
Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.386377 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa"} err="failed to get container status \"2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\": rpc error: code = NotFound desc = could not find container \"2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa\": container with ID starting with 2d4748632d436ef855f51df70c9909a9d14af27d05e262fbf2a36a4e87b24baa not found: ID does not exist"
Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.386391 4813 scope.go:117] "RemoveContainer" containerID="74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb"
Dec 02 10:11:46 crc kubenswrapper[4813]: E1202 10:11:46.386727 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\": container with ID starting with 74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb not found: ID does not exist" containerID="74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb"
Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.386755 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb"} err="failed to get container status \"74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\": rpc error: code = NotFound desc = could not find container \"74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb\": container with ID starting with 74c07b48c1155a362643e4108b1390863c3e88d06c4612ba7d5de60baa71dceb not found: ID does not exist"
Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.386773 4813 scope.go:117] "RemoveContainer" containerID="a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa"
Dec 02 10:11:46 crc kubenswrapper[4813]: E1202 10:11:46.387152 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\": container with ID starting with a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa not found: ID does not exist" containerID="a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa"
Dec 02 10:11:46 crc kubenswrapper[4813]: I1202 10:11:46.387181 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa"} err="failed to get container status \"a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\": rpc error: code = NotFound desc = could not find container \"a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa\": container with ID starting with a2d4a28db2bafbbd4f1c71451f4674da309c837646a6753cdd860069296083fa not found: ID does not exist"
Dec 02 10:11:51 crc kubenswrapper[4813]: E1202 10:11:51.071316 4813 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" volumeName="registry-storage"
Dec 02 10:11:51 crc kubenswrapper[4813]: E1202 10:11:51.417206 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:51 crc kubenswrapper[4813]: E1202 10:11:51.417597 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:51 crc kubenswrapper[4813]: E1202 10:11:51.417999 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:51 crc kubenswrapper[4813]: E1202 10:11:51.418420 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:51 crc kubenswrapper[4813]: E1202 10:11:51.418819 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:51 crc kubenswrapper[4813]: I1202 10:11:51.418852 4813 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 02 10:11:51 crc kubenswrapper[4813]: E1202 10:11:51.419148 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="200ms"
Dec 02 10:11:51 crc kubenswrapper[4813]: E1202 10:11:51.620099 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms"
Dec 02 10:11:52 crc kubenswrapper[4813]: E1202 10:11:52.020964 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms"
Dec 02 10:11:52 crc kubenswrapper[4813]: E1202 10:11:52.822311 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s"
Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.210652 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" containerName="oauth-openshift" containerID="cri-o://4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535" gracePeriod=15
Dec 02 10:11:54 crc kubenswrapper[4813]: E1202 10:11:54.423180 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="3.2s"
Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.571356 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r"
Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.572369 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.573052 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.573331 4813 status_manager.go:851] "Failed to get status for pod" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g8r9r\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626468 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-service-ca\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") "
Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-policies\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") "
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-provider-selection\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626572 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-error\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626600 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-serving-cert\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626641 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-login\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626657 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-idp-0-file-data\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626677 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-session\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7bx9\" (UniqueName: \"kubernetes.io/projected/c5909f8e-1a62-455a-a85a-73d85747e3a7-kube-api-access-x7bx9\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-ocp-branding-template\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-dir\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626790 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-cliconfig\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-router-certs\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.626837 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-trusted-ca-bundle\") pod \"c5909f8e-1a62-455a-a85a-73d85747e3a7\" (UID: \"c5909f8e-1a62-455a-a85a-73d85747e3a7\") " Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.627384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.627910 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.628024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.628310 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.628531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.634990 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.635058 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5909f8e-1a62-455a-a85a-73d85747e3a7-kube-api-access-x7bx9" (OuterVolumeSpecName: "kube-api-access-x7bx9") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "kube-api-access-x7bx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.643008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.643234 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.643687 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.644740 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.644851 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.645824 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.646165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c5909f8e-1a62-455a-a85a-73d85747e3a7" (UID: "c5909f8e-1a62-455a-a85a-73d85747e3a7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727841 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727890 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727910 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727923 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727936 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727946 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727957 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727969 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727980 4813 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x7bx9\" (UniqueName: \"kubernetes.io/projected/c5909f8e-1a62-455a-a85a-73d85747e3a7-kube-api-access-x7bx9\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.727990 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.728003 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5909f8e-1a62-455a-a85a-73d85747e3a7-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.728015 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.728026 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:54 crc kubenswrapper[4813]: I1202 10:11:54.728037 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5909f8e-1a62-455a-a85a-73d85747e3a7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.329599 4813 generic.go:334] "Generic (PLEG): container finished" podID="c5909f8e-1a62-455a-a85a-73d85747e3a7" containerID="4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535" exitCode=0 Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.329663 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" event={"ID":"c5909f8e-1a62-455a-a85a-73d85747e3a7","Type":"ContainerDied","Data":"4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535"} Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.329701 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" event={"ID":"c5909f8e-1a62-455a-a85a-73d85747e3a7","Type":"ContainerDied","Data":"982ad2e381092bee947bb39b7ed0b3919dba3e048546234a5ef60a70bec08f20"} Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.329721 4813 scope.go:117] "RemoveContainer" containerID="4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.329834 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.330857 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.331216 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.331731 4813 status_manager.go:851] "Failed to get status for pod" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g8r9r\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.354228 4813 scope.go:117] "RemoveContainer" containerID="4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535" Dec 02 10:11:55 crc kubenswrapper[4813]: E1202 10:11:55.355947 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535\": container with ID starting with 4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535 not found: ID does not exist" containerID="4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.355997 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535"} err="failed to get container status \"4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535\": rpc error: code = NotFound desc = could not find container \"4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535\": container with ID starting with 4402bf16422b2b5a711e2fdeeeb8411b9693cec0d9ad8bfa221453ad02c46535 not found: ID does not exist" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.356791 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.357495 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:55 crc kubenswrapper[4813]: I1202 10:11:55.358106 4813 status_manager.go:851] "Failed to get status for pod" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" 
pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g8r9r\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:55 crc kubenswrapper[4813]: E1202 10:11:55.371168 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d5e4a909a219d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 10:11:43.345713565 +0000 UTC m=+227.540887867,LastTimestamp:2025-12-02 10:11:43.345713565 +0000 UTC m=+227.540887867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 10:11:56 crc kubenswrapper[4813]: I1202 10:11:56.072391 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:56 crc kubenswrapper[4813]: I1202 10:11:56.072872 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:56 crc kubenswrapper[4813]: I1202 10:11:56.073362 4813 status_manager.go:851] "Failed to get status for pod" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g8r9r\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:57 crc kubenswrapper[4813]: I1202 10:11:57.341255 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 10:11:57 crc kubenswrapper[4813]: I1202 10:11:57.341305 4813 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659" exitCode=1 Dec 02 10:11:57 crc kubenswrapper[4813]: I1202 10:11:57.341337 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659"} Dec 02 10:11:57 crc kubenswrapper[4813]: 
I1202 10:11:57.341820 4813 scope.go:117] "RemoveContainer" containerID="251a42218b0b57fcfeb234a29b61a7d4e8fff7ab7a43379bcf3113f6ccefb659" Dec 02 10:11:57 crc kubenswrapper[4813]: I1202 10:11:57.342771 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:57 crc kubenswrapper[4813]: I1202 10:11:57.342996 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:57 crc kubenswrapper[4813]: I1202 10:11:57.343218 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:57 crc kubenswrapper[4813]: I1202 10:11:57.343571 4813 status_manager.go:851] "Failed to get status for pod" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g8r9r\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:57 crc kubenswrapper[4813]: E1202 10:11:57.625433 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="6.4s" Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.067573 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.068842 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.069581 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.070192 4813 status_manager.go:851] "Failed to get status for pod" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g8r9r\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.070581 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.082009 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d" Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.082043 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d" Dec 02 10:11:58 crc kubenswrapper[4813]: E1202 10:11:58.082475 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.083000 4813 util.go:30] "No sandbox for pod can be found. 
Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.083000 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:11:58 crc kubenswrapper[4813]: W1202 10:11:58.102907 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d4d6f69dd29ae6c59b451d514c5465fe3ebafa8b19dbe6cd4629322c0c1f8770 WatchSource:0}: Error finding container d4d6f69dd29ae6c59b451d514c5465fe3ebafa8b19dbe6cd4629322c0c1f8770: Status 404 returned error can't find the container with id d4d6f69dd29ae6c59b451d514c5465fe3ebafa8b19dbe6cd4629322c0c1f8770
Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.351023 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.352379 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"489ca49c5b6cff4b92c26d19eb66db7733eb0534b728aada44eaa71026a3259e"}
Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.353842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d4d6f69dd29ae6c59b451d514c5465fe3ebafa8b19dbe6cd4629322c0c1f8770"}
Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.353896 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.354420 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.354763 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:58 crc kubenswrapper[4813]: I1202 10:11:58.355273 4813 status_manager.go:851] "Failed to get status for pod" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g8r9r\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:59 crc kubenswrapper[4813]: I1202 10:11:59.360655 4813 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c94624336aa31b84896eaea6771228dc665da5cc0a5681397ac631f7be0d2405" exitCode=0
Dec 02 10:11:59 crc kubenswrapper[4813]: I1202 10:11:59.360729 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c94624336aa31b84896eaea6771228dc665da5cc0a5681397ac631f7be0d2405"}
Dec 02 10:11:59 crc kubenswrapper[4813]: I1202 10:11:59.361197 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d"
Dec 02 10:11:59 crc kubenswrapper[4813]: I1202 10:11:59.361235 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d"
Dec 02 10:11:59 crc kubenswrapper[4813]: E1202 10:11:59.361739 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:11:59 crc kubenswrapper[4813]: I1202 10:11:59.361817 4813 status_manager.go:851] "Failed to get status for pod" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:59 crc kubenswrapper[4813]: I1202 10:11:59.362366 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:59 crc kubenswrapper[4813]: I1202 10:11:59.362797 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:11:59 crc kubenswrapper[4813]: I1202 10:11:59.363307 4813 status_manager.go:851] "Failed to get status for pod" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" pod="openshift-authentication/oauth-openshift-558db77b4-g8r9r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g8r9r\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 02 10:12:00 crc kubenswrapper[4813]: I1202 10:12:00.368483 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c5f84986ba6e9363c3a3ada26e170b65d44c08c1ace39c50c63f6617d9c2730"}
Dec 02 10:12:00 crc kubenswrapper[4813]: I1202 10:12:00.368934 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"da79c05147a7645bda826d0f63bf83e1f02bbd95a14eee5d45c09ab1dc63be5c"}
Dec 02 10:12:00 crc kubenswrapper[4813]: I1202 10:12:00.368945 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1046c1a76b1fcc0ed02f30cf546670f4d95e058182b781f78de492556af54011"}
Dec 02 10:12:00 crc kubenswrapper[4813]: I1202 10:12:00.368953 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"581cac12ec1c1c32afe55c7a72e88dba82f16acf207908eb0d4e2dcd6ae71671"}
Dec 02 10:12:01 crc kubenswrapper[4813]: I1202 10:12:01.378065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0c775de6f3d2cad0794382a07d0c1f5b7cc29f37336b0fb5ae052088a241a8ac"}
Dec 02 10:12:01 crc kubenswrapper[4813]: I1202 10:12:01.378292 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:12:01 crc kubenswrapper[4813]: I1202 10:12:01.378697 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d"
Dec 02 10:12:01 crc kubenswrapper[4813]: I1202 10:12:01.378725 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d"
Dec 02 10:12:02 crc kubenswrapper[4813]: I1202 10:12:02.061480 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:12:03 crc kubenswrapper[4813]: I1202 10:12:03.083470 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:12:03 crc kubenswrapper[4813]: I1202 10:12:03.083566 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:12:03 crc kubenswrapper[4813]: I1202 10:12:03.090379 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:12:05 crc kubenswrapper[4813]: I1202 10:12:05.125025 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:12:05 crc kubenswrapper[4813]: I1202 10:12:05.129415 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 02 10:12:06 crc kubenswrapper[4813]: I1202 10:12:06.387644 4813 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:12:06 crc kubenswrapper[4813]: I1202 10:12:06.403175 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d"
Dec 02 10:12:06 crc kubenswrapper[4813]: I1202 10:12:06.403209 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d"
Dec 02 10:12:06 crc kubenswrapper[4813]: I1202 10:12:06.409001 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:12:06 crc kubenswrapper[4813]: I1202 10:12:06.595908 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eaa5ff21-2bdb-4221-8800-e1917d024971"
"Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d" Dec 02 10:12:07 crc kubenswrapper[4813]: I1202 10:12:07.407937 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d" Dec 02 10:12:07 crc kubenswrapper[4813]: I1202 10:12:07.411570 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eaa5ff21-2bdb-4221-8800-e1917d024971" Dec 02 10:12:12 crc kubenswrapper[4813]: I1202 10:12:12.077559 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 10:12:15 crc kubenswrapper[4813]: I1202 10:12:15.648177 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 02 10:12:16 crc kubenswrapper[4813]: I1202 10:12:16.482948 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 10:12:16 crc kubenswrapper[4813]: I1202 10:12:16.845431 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 10:12:17 crc kubenswrapper[4813]: I1202 10:12:17.002191 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 10:12:17 crc kubenswrapper[4813]: I1202 10:12:17.295960 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 10:12:17 crc kubenswrapper[4813]: I1202 10:12:17.696118 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 02 10:12:17 crc kubenswrapper[4813]: I1202 10:12:17.855246 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 10:12:18 crc kubenswrapper[4813]: I1202 10:12:18.217596 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 02 10:12:18 crc kubenswrapper[4813]: I1202 10:12:18.423223 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 10:12:18 crc kubenswrapper[4813]: I1202 10:12:18.536634 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 10:12:18 crc kubenswrapper[4813]: I1202 10:12:18.644021 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 10:12:18 crc kubenswrapper[4813]: I1202 10:12:18.661375 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 10:12:18 crc kubenswrapper[4813]: I1202 10:12:18.816033 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 02 10:12:18 crc kubenswrapper[4813]: I1202 10:12:18.992478 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.035225 4813 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.250135 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.253571 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.482496 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.639124 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.727319 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.812713 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.885164 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.941000 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 10:12:19 crc kubenswrapper[4813]: I1202 10:12:19.967548 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.088335 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.193010 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.200178 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.201405 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.285971 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.287767 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.292254 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.415646 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.445332 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.544933 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 02 
10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.555446 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.644611 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.709363 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.825899 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 02 10:12:20 crc kubenswrapper[4813]: I1202 10:12:20.936754 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.038242 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.048420 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.062234 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.116592 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.254274 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.261592 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.403595 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.569436 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.641052 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.656182 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.678956 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.961633 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 02 10:12:21 crc kubenswrapper[4813]: I1202 10:12:21.984796 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.034700 4813 
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.034700 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.035642 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.091772 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.191872 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.233433 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.239029 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.240107 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.304085 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.391037 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.493558 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.530479 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.538037 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.544349 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.563666 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 02 10:12:22 crc kubenswrapper[4813]: I1202 10:12:22.798084 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.057915 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.068496 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.082751 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.084517 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.084498388 podStartE2EDuration="40.084498388s" podCreationTimestamp="2025-12-02 10:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:12:06.456306026 +0000 UTC m=+250.651480328" watchObservedRunningTime="2025-12-02 10:12:23.084498388 +0000 UTC m=+267.279672690"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.087039 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g8r9r","openshift-kube-apiserver/kube-apiserver-crc"]
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.087180 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.087205 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ffq5"]
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.087422 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4ffq5" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="registry-server" containerID="cri-o://8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076" gracePeriod=2
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.087739 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.087767 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2a43cba-eadf-448d-9f26-f8a245a3d76d"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.106985 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.106969135 podStartE2EDuration="17.106969135s" podCreationTimestamp="2025-12-02 10:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:12:23.106137051 +0000 UTC m=+267.301311353" watchObservedRunningTime="2025-12-02 10:12:23.106969135 +0000 UTC m=+267.302143437"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.111227 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.174218 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.187789 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.232874 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.269116 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.276063 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.344512 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.369781 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.405723 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.462315 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.500833 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.506013 4813 generic.go:334] "Generic (PLEG): container finished" podID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerID="8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076" exitCode=0
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.506116 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ffq5"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.506152 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ffq5" event={"ID":"526e97b9-958b-4fc1-859b-5b0c10d093c5","Type":"ContainerDied","Data":"8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076"}
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.506211 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ffq5" event={"ID":"526e97b9-958b-4fc1-859b-5b0c10d093c5","Type":"ContainerDied","Data":"54219f97af13171180a318ba8eabab763252dff9f07ca45d402de27184232321"}
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.506238 4813 scope.go:117] "RemoveContainer" containerID="8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.512462 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.521433 4813 scope.go:117] "RemoveContainer" containerID="2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.542532 4813 scope.go:117] "RemoveContainer" containerID="5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.566930 4813 scope.go:117] "RemoveContainer" containerID="8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076"
Dec 02 10:12:23 crc kubenswrapper[4813]: E1202 10:12:23.567718 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076\": container with ID starting with 8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076 not found: ID does not exist" containerID="8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.567761 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076"} err="failed to get container status \"8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076\": rpc error: code = NotFound desc = could not find container \"8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076\": container with ID starting with 8e9de8079a2b4bd3cb22adc7b42388f90283a15a919c2824935f8813761b9076 not found: ID does not exist"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.567790 4813 scope.go:117] "RemoveContainer" containerID="2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b"
Dec 02 10:12:23 crc kubenswrapper[4813]: E1202 10:12:23.570450 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b\": container with ID starting with 2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b not found: ID does not exist" containerID="2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.570529 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b"} err="failed to get container status \"2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b\": rpc error: code = NotFound desc = could not find container \"2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b\": container with ID starting with 2e4d56441ca545c26981de293f0325256b0d7aafbe402e25363bdfacccef942b not found: ID does not exist"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.570562 4813 scope.go:117] "RemoveContainer" containerID="5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6"
Dec 02 10:12:23 crc kubenswrapper[4813]: E1202 10:12:23.570944 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6\": container with ID starting with 5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6 not found: ID does not exist" containerID="5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.571236 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6"} err="failed to get container status \"5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6\": rpc error: code = NotFound desc = could not find container \"5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6\": container with ID starting with 5b3e8d230dac9314cedb0bf09027bbf4439342d7fe27709d28278a5c5391fcc6 not found: ID does not exist"
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.608748 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5ntb\" (UniqueName: \"kubernetes.io/projected/526e97b9-958b-4fc1-859b-5b0c10d093c5-kube-api-access-d5ntb\") pod \"526e97b9-958b-4fc1-859b-5b0c10d093c5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") "
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.608834 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-catalog-content\") pod \"526e97b9-958b-4fc1-859b-5b0c10d093c5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") "
Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.608898 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-utilities\") pod \"526e97b9-958b-4fc1-859b-5b0c10d093c5\" (UID: \"526e97b9-958b-4fc1-859b-5b0c10d093c5\") " Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.610340 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-utilities" (OuterVolumeSpecName: "utilities") pod "526e97b9-958b-4fc1-859b-5b0c10d093c5" (UID: "526e97b9-958b-4fc1-859b-5b0c10d093c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.617700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526e97b9-958b-4fc1-859b-5b0c10d093c5-kube-api-access-d5ntb" (OuterVolumeSpecName: "kube-api-access-d5ntb") pod "526e97b9-958b-4fc1-859b-5b0c10d093c5" (UID: "526e97b9-958b-4fc1-859b-5b0c10d093c5"). InnerVolumeSpecName "kube-api-access-d5ntb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.625827 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.637334 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.701744 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.709973 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5ntb\" (UniqueName: \"kubernetes.io/projected/526e97b9-958b-4fc1-859b-5b0c10d093c5-kube-api-access-d5ntb\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.710031 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.723393 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "526e97b9-958b-4fc1-859b-5b0c10d093c5" (UID: "526e97b9-958b-4fc1-859b-5b0c10d093c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.810722 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526e97b9-958b-4fc1-859b-5b0c10d093c5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.811842 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.820809 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.824956 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.829980 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.836097 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ffq5"] Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.839022 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ffq5"] Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.927117 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 02 10:12:23 crc kubenswrapper[4813]: I1202 10:12:23.939093 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.064050 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.074481 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" path="/var/lib/kubelet/pods/526e97b9-958b-4fc1-859b-5b0c10d093c5/volumes" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.075351 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" path="/var/lib/kubelet/pods/c5909f8e-1a62-455a-a85a-73d85747e3a7/volumes" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.091522 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.091933 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.107824 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.111263 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.161921 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.167447 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 
02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.226756 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.235630 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.282208 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.293495 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.333930 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.410441 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.415455 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.453003 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.630912 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.834590 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.857588 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.929875 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.935814 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.974030 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 10:12:24 crc kubenswrapper[4813]: I1202 10:12:24.997564 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.040238 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.053645 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.101887 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.114868 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.122717 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169016 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-985c66b4-j25kq"] Dec 02 10:12:25 crc kubenswrapper[4813]: E1202 10:12:25.169277 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" containerName="oauth-openshift" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169293 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" containerName="oauth-openshift" Dec 02 10:12:25 crc kubenswrapper[4813]: E1202 10:12:25.169306 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="extract-utilities" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169313 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="extract-utilities" Dec 02 10:12:25 crc kubenswrapper[4813]: E1202 10:12:25.169323 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" containerName="installer" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169331 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" containerName="installer" Dec 02 10:12:25 crc kubenswrapper[4813]: E1202 10:12:25.169344 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="registry-server" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169351 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="registry-server" Dec 02 10:12:25 crc kubenswrapper[4813]: E1202 10:12:25.169364 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="extract-content" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169372 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="extract-content" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169468 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5909f8e-1a62-455a-a85a-73d85747e3a7" containerName="oauth-openshift" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169480 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="526e97b9-958b-4fc1-859b-5b0c10d093c5" containerName="registry-server" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169488 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="504f99d9-c9d5-4aa5-a816-d8b54033d4eb" containerName="installer" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.169854 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.171772 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.171812 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.173054 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.173056 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.173175 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.173281 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.173366 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.173388 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.174185 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.174371 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.175144 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.175541 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.185825 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.186159 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.194342 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.199590 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.226968 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " 
pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.227342 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.227465 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-login\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.227589 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m86zf\" (UniqueName: \"kubernetes.io/projected/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-kube-api-access-m86zf\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.227713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-session\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.227851 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.227983 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-audit-policies\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.228108 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.228236 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-error\") pod 
\"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.228350 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.228464 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.228569 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-service-ca\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.228689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.228808 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-audit-dir\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.287208 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330676 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-login\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330709 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m86zf\" (UniqueName: \"kubernetes.io/projected/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-kube-api-access-m86zf\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-session\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330790 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-audit-policies\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330811 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330832 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-error\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330852 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330875 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330898 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-service-ca\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330928 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.330960 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-audit-dir\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.331039 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-audit-dir\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.332820 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-audit-policies\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.332888 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.332948 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.333170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.336502 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-session\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.336521 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.336727 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.336967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.337270 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-error\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.337635 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.338474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-system-router-certs\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.338883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-v4-0-config-user-template-login\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: 
\"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.341121 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.350405 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m86zf\" (UniqueName: \"kubernetes.io/projected/7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2-kube-api-access-m86zf\") pod \"oauth-openshift-985c66b4-j25kq\" (UID: \"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2\") " pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.365038 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.390605 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.457240 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.485556 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.487264 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.495606 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.505384 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.565364 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.640686 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.697809 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.701978 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.869219 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.890086 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.896807 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 10:12:25 crc kubenswrapper[4813]: I1202 10:12:25.994942 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 10:12:26 crc 
kubenswrapper[4813]: I1202 10:12:26.009120 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.062281 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.082553 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.124262 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.175004 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.258753 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.460351 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.547678 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.555048 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.566742 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.628436 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.660211 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.669383 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.722221 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.738547 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.799347 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.820507 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.926185 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.934134 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 10:12:26 crc 
kubenswrapper[4813]: I1202 10:12:26.981619 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 10:12:26 crc kubenswrapper[4813]: I1202 10:12:26.992416 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.006309 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.028669 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.029298 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.056454 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.118854 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.188218 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.221195 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.307059 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.309329 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.324545 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.449280 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.481602 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.491413 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.562743 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.599124 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.609546 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.693214 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 
02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.704984 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.707781 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.745516 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.860530 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.889235 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.889741 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 10:12:27 crc kubenswrapper[4813]: I1202 10:12:27.903983 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.112251 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.186100 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.214237 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.274172 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.276793 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.403167 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.477220 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.489914 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.616434 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.676215 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.690997 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.798476 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.798710 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d" gracePeriod=5 Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.870032 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.881691 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 10:12:28 crc kubenswrapper[4813]: I1202 10:12:28.970714 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.133738 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.343360 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.348028 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.581409 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.687286 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.688361 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.708195 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.792400 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.818060 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 10:12:29 crc kubenswrapper[4813]: I1202 10:12:29.968607 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.192987 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.267171 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.323006 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.366349 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.438875 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.489228 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.509563 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.509740 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.613224 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.916215 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 10:12:30 crc kubenswrapper[4813]: I1202 10:12:30.926584 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.008155 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.153713 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.160366 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.161609 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.188724 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.218192 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.316791 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.490856 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.659369 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.887394 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 02 10:12:31 crc kubenswrapper[4813]: I1202 10:12:31.987890 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.363365 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.363724 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544345 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544456 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544526 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544601 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544625 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544697 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544654 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544799 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544975 4813 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.544989 4813 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.545002 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.545013 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.552367 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.560781 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.560856 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d" exitCode=137 Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.560915 4813 scope.go:117] "RemoveContainer" containerID="47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.561053 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.598517 4813 scope.go:117] "RemoveContainer" containerID="47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d" Dec 02 10:12:34 crc kubenswrapper[4813]: E1202 10:12:34.598929 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d\": container with ID starting with 47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d not found: ID does not exist" containerID="47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.598972 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d"} err="failed to get container status \"47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d\": rpc error: code = NotFound desc = could not find container \"47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d\": container with ID starting with 47b1e522bf1618a29e2bde3450c1a3b296d2642add0545c68e8b0dfe31ac412d not found: ID does not exist" Dec 02 10:12:34 crc kubenswrapper[4813]: I1202 10:12:34.645758 4813 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:36 crc kubenswrapper[4813]: I1202 10:12:36.075246 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 02 10:12:36 crc kubenswrapper[4813]: I1202 10:12:36.075715 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 02 10:12:36 crc kubenswrapper[4813]: I1202 10:12:36.085194 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:12:36 crc kubenswrapper[4813]: I1202 10:12:36.085283 4813 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4dd6919e-eddf-4c2b-8767-86af2d533f31" Dec 02 10:12:36 crc kubenswrapper[4813]: I1202 10:12:36.089027 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 10:12:36 crc kubenswrapper[4813]: I1202 10:12:36.089104 4813 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4dd6919e-eddf-4c2b-8767-86af2d533f31" Dec 02 10:12:40 crc kubenswrapper[4813]: I1202 10:12:40.954373 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 10:12:45 crc kubenswrapper[4813]: I1202 10:12:45.172517 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 10:12:45 crc kubenswrapper[4813]: I1202 10:12:45.336798 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 10:12:45 crc kubenswrapper[4813]: I1202 10:12:45.908887 
Dec 02 10:12:47 crc kubenswrapper[4813]: I1202 10:12:47.631805 4813 generic.go:334] "Generic (PLEG): container finished" podID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerID="2c85e95c5e5841e150d6e640e24feac3189582149103b30ec543428b950b2b5a" exitCode=0
Dec 02 10:12:47 crc kubenswrapper[4813]: I1202 10:12:47.631966 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" event={"ID":"03ddc93f-c104-482e-a615-1f6ce52c62b8","Type":"ContainerDied","Data":"2c85e95c5e5841e150d6e640e24feac3189582149103b30ec543428b950b2b5a"}
Dec 02 10:12:47 crc kubenswrapper[4813]: I1202 10:12:47.632874 4813 scope.go:117] "RemoveContainer" containerID="2c85e95c5e5841e150d6e640e24feac3189582149103b30ec543428b950b2b5a"
Dec 02 10:12:48 crc kubenswrapper[4813]: I1202 10:12:48.639012 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" event={"ID":"03ddc93f-c104-482e-a615-1f6ce52c62b8","Type":"ContainerStarted","Data":"565733c86343a251acdbc84e2ba15c8cdc61c337ccffe79a68a7c85a7d56d4f6"}
Dec 02 10:12:48 crc kubenswrapper[4813]: I1202 10:12:48.639674 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tt449"
Dec 02 10:12:48 crc kubenswrapper[4813]: I1202 10:12:48.643981 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tt449"
Dec 02 10:12:48 crc kubenswrapper[4813]: I1202 10:12:48.848644 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 02 10:12:52 crc kubenswrapper[4813]: I1202 10:12:52.366065 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 02 10:12:55 crc kubenswrapper[4813]: I1202 10:12:55.138255 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfw69"]
Dec 02 10:12:55 crc kubenswrapper[4813]: I1202 10:12:55.138649 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" podUID="17a5f145-950f-4585-a991-6bbe400f41d3" containerName="controller-manager" containerID="cri-o://4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df" gracePeriod=30
Dec 02 10:12:55 crc kubenswrapper[4813]: I1202 10:12:55.252982 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2"]
Dec 02 10:12:55 crc kubenswrapper[4813]: I1202 10:12:55.676673 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" podUID="e30a4ae1-71f0-4065-8e7a-e75e2588aeac" containerName="route-controller-manager" containerID="cri-o://9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf" gracePeriod=30
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.231555 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69"
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.423609 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-client-ca\") pod \"17a5f145-950f-4585-a991-6bbe400f41d3\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") "
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.423680 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl5fs\" (UniqueName: \"kubernetes.io/projected/17a5f145-950f-4585-a991-6bbe400f41d3-kube-api-access-gl5fs\") pod \"17a5f145-950f-4585-a991-6bbe400f41d3\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") "
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.423715 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-config\") pod \"17a5f145-950f-4585-a991-6bbe400f41d3\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") "
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.423783 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-proxy-ca-bundles\") pod \"17a5f145-950f-4585-a991-6bbe400f41d3\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") "
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.423822 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17a5f145-950f-4585-a991-6bbe400f41d3-serving-cert\") pod \"17a5f145-950f-4585-a991-6bbe400f41d3\" (UID: \"17a5f145-950f-4585-a991-6bbe400f41d3\") "
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.425100 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-client-ca" (OuterVolumeSpecName: "client-ca") pod "17a5f145-950f-4585-a991-6bbe400f41d3" (UID: "17a5f145-950f-4585-a991-6bbe400f41d3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.425219 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "17a5f145-950f-4585-a991-6bbe400f41d3" (UID: "17a5f145-950f-4585-a991-6bbe400f41d3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.425364 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-config" (OuterVolumeSpecName: "config") pod "17a5f145-950f-4585-a991-6bbe400f41d3" (UID: "17a5f145-950f-4585-a991-6bbe400f41d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.432128 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a5f145-950f-4585-a991-6bbe400f41d3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17a5f145-950f-4585-a991-6bbe400f41d3" (UID: "17a5f145-950f-4585-a991-6bbe400f41d3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.432626 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a5f145-950f-4585-a991-6bbe400f41d3-kube-api-access-gl5fs" (OuterVolumeSpecName: "kube-api-access-gl5fs") pod "17a5f145-950f-4585-a991-6bbe400f41d3" (UID: "17a5f145-950f-4585-a991-6bbe400f41d3"). InnerVolumeSpecName "kube-api-access-gl5fs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.437240 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.526863 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.526908 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17a5f145-950f-4585-a991-6bbe400f41d3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.526919 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-client-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.526931 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl5fs\" (UniqueName: \"kubernetes.io/projected/17a5f145-950f-4585-a991-6bbe400f41d3-kube-api-access-gl5fs\") on node \"crc\" DevicePath \"\""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.526947 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a5f145-950f-4585-a991-6bbe400f41d3-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.557508 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2"
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.683888 4813 generic.go:334] "Generic (PLEG): container finished" podID="17a5f145-950f-4585-a991-6bbe400f41d3" containerID="4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df" exitCode=0
Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.683968 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69"
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.684007 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" event={"ID":"17a5f145-950f-4585-a991-6bbe400f41d3","Type":"ContainerDied","Data":"4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df"} Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.684038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bfw69" event={"ID":"17a5f145-950f-4585-a991-6bbe400f41d3","Type":"ContainerDied","Data":"58f8a37497c8cf5654579aba870327e374a8881a738c08cbb7aa0321164ff6d1"} Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.684088 4813 scope.go:117] "RemoveContainer" containerID="4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.685730 4813 generic.go:334] "Generic (PLEG): container finished" podID="e30a4ae1-71f0-4065-8e7a-e75e2588aeac" containerID="9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf" exitCode=0 Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.685773 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" event={"ID":"e30a4ae1-71f0-4065-8e7a-e75e2588aeac","Type":"ContainerDied","Data":"9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf"} Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.685798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" event={"ID":"e30a4ae1-71f0-4065-8e7a-e75e2588aeac","Type":"ContainerDied","Data":"de4e9b54dc7292cc7d17ef05e00f3a8f4edb3dfd8dfa75563fce19d43f6099ab"} Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.685851 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.709412 4813 scope.go:117] "RemoveContainer" containerID="4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df" Dec 02 10:12:56 crc kubenswrapper[4813]: E1202 10:12:56.710095 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df\": container with ID starting with 4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df not found: ID does not exist" containerID="4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.710128 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df"} err="failed to get container status \"4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df\": rpc error: code = NotFound desc = could not find container \"4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df\": container with ID starting with 4b47abdecd8b653c3eb09ed88f22ea18a2598dd3dc54ff7b0df303b292ea59df not found: ID does not exist" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.710159 4813 scope.go:117] "RemoveContainer" containerID="9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.710907 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfw69"] Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.714820 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bfw69"] Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.728812 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f7qj\" (UniqueName: \"kubernetes.io/projected/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-kube-api-access-2f7qj\") pod \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.728870 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-config\") pod \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.728948 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-client-ca\") pod \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.729030 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-serving-cert\") pod \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\" (UID: \"e30a4ae1-71f0-4065-8e7a-e75e2588aeac\") " Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.730114 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"e30a4ae1-71f0-4065-8e7a-e75e2588aeac" (UID: "e30a4ae1-71f0-4065-8e7a-e75e2588aeac"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.730125 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-config" (OuterVolumeSpecName: "config") pod "e30a4ae1-71f0-4065-8e7a-e75e2588aeac" (UID: "e30a4ae1-71f0-4065-8e7a-e75e2588aeac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.732263 4813 scope.go:117] "RemoveContainer" containerID="9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf" Dec 02 10:12:56 crc kubenswrapper[4813]: E1202 10:12:56.732861 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf\": container with ID starting with 9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf not found: ID does not exist" containerID="9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.732899 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf"} err="failed to get container status \"9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf\": rpc error: code = NotFound desc = could not find container \"9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf\": container with ID starting with 9c6a02f4d666ece0034385c5d7db2337b96938b3464d91fe97fda2eae28adfaf not found: ID does not exist" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.737310 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-kube-api-access-2f7qj" (OuterVolumeSpecName: "kube-api-access-2f7qj") pod "e30a4ae1-71f0-4065-8e7a-e75e2588aeac" (UID: "e30a4ae1-71f0-4065-8e7a-e75e2588aeac"). InnerVolumeSpecName "kube-api-access-2f7qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.737459 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e30a4ae1-71f0-4065-8e7a-e75e2588aeac" (UID: "e30a4ae1-71f0-4065-8e7a-e75e2588aeac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.830179 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.830225 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.830238 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f7qj\" (UniqueName: \"kubernetes.io/projected/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-kube-api-access-2f7qj\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.830254 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e30a4ae1-71f0-4065-8e7a-e75e2588aeac-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:12:56 crc kubenswrapper[4813]: I1202 10:12:56.953704 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.014301 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2"] Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.018404 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-scvc2"] Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.230775 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"] Dec 02 10:12:57 crc kubenswrapper[4813]: E1202 10:12:57.231248 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a5f145-950f-4585-a991-6bbe400f41d3" containerName="controller-manager" Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.231280 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a5f145-950f-4585-a991-6bbe400f41d3" containerName="controller-manager" Dec 02 10:12:57 crc kubenswrapper[4813]: E1202 10:12:57.231321 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e30a4ae1-71f0-4065-8e7a-e75e2588aeac" containerName="route-controller-manager" Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.231331 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30a4ae1-71f0-4065-8e7a-e75e2588aeac" containerName="route-controller-manager" Dec 02 10:12:57 crc kubenswrapper[4813]: E1202 10:12:57.231345 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.231353 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.231505 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.231530 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e30a4ae1-71f0-4065-8e7a-e75e2588aeac" containerName="route-controller-manager" Dec 02 10:12:57 crc 
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.232261 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.234384 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.234412 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6488c7567-59pxp"]
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.234745 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.234867 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.235028 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.235195 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.235540 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.236031 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.237621 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.237618 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.237949 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.237970 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.238691 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.238993 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.247708 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.335995 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8qhz\" (UniqueName: \"kubernetes.io/projected/3e928854-10aa-4457-9934-d78b4ac686eb-kube-api-access-r8qhz\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.336064 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbd2\" (UniqueName: \"kubernetes.io/projected/5961752a-37a1-4d64-95e6-9181e5960434-kube-api-access-brbd2\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.336159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-config\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.336183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-config\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.336212 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-client-ca\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.336328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5961752a-37a1-4d64-95e6-9181e5960434-serving-cert\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.336653 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e928854-10aa-4457-9934-d78b4ac686eb-serving-cert\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.336747 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-client-ca\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.336770 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-proxy-ca-bundles\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.338716 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6488c7567-59pxp"]
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.342466 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-985c66b4-j25kq"]
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.376451 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"]
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.437716 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e928854-10aa-4457-9934-d78b4ac686eb-serving-cert\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.437781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-client-ca\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.437810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-proxy-ca-bundles\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.438662 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qhz\" (UniqueName: \"kubernetes.io/projected/3e928854-10aa-4457-9934-d78b4ac686eb-kube-api-access-r8qhz\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.438734 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbd2\" (UniqueName: \"kubernetes.io/projected/5961752a-37a1-4d64-95e6-9181e5960434-kube-api-access-brbd2\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.438792 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-config\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.438818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-config\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.438846 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-client-ca\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.438884 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5961752a-37a1-4d64-95e6-9181e5960434-serving-cert\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.439921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-client-ca\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.440364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-client-ca\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.440608 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-config\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.441171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-proxy-ca-bundles\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.441597 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-config\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.444443 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5961752a-37a1-4d64-95e6-9181e5960434-serving-cert\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.444482 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e928854-10aa-4457-9934-d78b4ac686eb-serving-cert\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.456442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbd2\" (UniqueName: \"kubernetes.io/projected/5961752a-37a1-4d64-95e6-9181e5960434-kube-api-access-brbd2\") pod \"controller-manager-6488c7567-59pxp\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.457784 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qhz\" (UniqueName: \"kubernetes.io/projected/3e928854-10aa-4457-9934-d78b4ac686eb-kube-api-access-r8qhz\") pod \"route-controller-manager-d5775f87f-lj7bw\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.557132 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"
Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.571290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp"
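The reconciler_common.go traffic above is the volume manager diffing desired state (the volumes the new pod specs reference) against actual state (what is currently attached and mounted), then issuing mount or unmount operations to close the gap; VerifyControllerAttachedVolume, MountVolume and the earlier UnmountVolume lines are those operations starting. Schematically, with interfaces of my own invention rather than the kubelet's:

    // volume_reconcile.go - the desired-vs-actual loop behind the
    // reconciler_common.go lines, reduced to its shape. Illustrative only.
    package volumereconcile

    // World maps a volume's unique name to whether it is present.
    type World map[string]bool

    // Reconcile mounts what is desired but absent and unmounts what is
    // present but no longer desired, exactly the two message families above.
    func Reconcile(desired, actual World, mount, unmount func(name string)) {
        for name := range desired {
            if !actual[name] {
                mount(name) // "operationExecutor.MountVolume started ..."
            }
        }
        for name := range actual {
            if !desired[name] {
                unmount(name) // "operationExecutor.UnmountVolume started ..."
            }
        }
    }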
Need to start a new one" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.583749 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-985c66b4-j25kq"] Dec 02 10:12:57 crc kubenswrapper[4813]: W1202 10:12:57.589879 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c4825b1_eb26_4071_82a3_2ab2a2d5ffc2.slice/crio-7f9d427fcb7f378640bde54749d7366228ce88c81b482d010d71cb31b72cc25f WatchSource:0}: Error finding container 7f9d427fcb7f378640bde54749d7366228ce88c81b482d010d71cb31b72cc25f: Status 404 returned error can't find the container with id 7f9d427fcb7f378640bde54749d7366228ce88c81b482d010d71cb31b72cc25f Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.691904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" event={"ID":"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2","Type":"ContainerStarted","Data":"7f9d427fcb7f378640bde54749d7366228ce88c81b482d010d71cb31b72cc25f"} Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.752575 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"] Dec 02 10:12:57 crc kubenswrapper[4813]: W1202 10:12:57.757372 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e928854_10aa_4457_9934_d78b4ac686eb.slice/crio-011ebefde832f6f7be7669680d9fbd689bf7441da52c73d31ec830d3eb0619b8 WatchSource:0}: Error finding container 011ebefde832f6f7be7669680d9fbd689bf7441da52c73d31ec830d3eb0619b8: Status 404 returned error can't find the container with id 011ebefde832f6f7be7669680d9fbd689bf7441da52c73d31ec830d3eb0619b8 Dec 02 10:12:57 crc kubenswrapper[4813]: I1202 10:12:57.810117 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6488c7567-59pxp"] Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.076190 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a5f145-950f-4585-a991-6bbe400f41d3" path="/var/lib/kubelet/pods/17a5f145-950f-4585-a991-6bbe400f41d3/volumes" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.077041 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e30a4ae1-71f0-4065-8e7a-e75e2588aeac" path="/var/lib/kubelet/pods/e30a4ae1-71f0-4065-8e7a-e75e2588aeac/volumes" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.251390 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.537874 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.708821 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" event={"ID":"5961752a-37a1-4d64-95e6-9181e5960434","Type":"ContainerStarted","Data":"7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4"} Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.708883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" 
event={"ID":"5961752a-37a1-4d64-95e6-9181e5960434","Type":"ContainerStarted","Data":"8fbd2b6e2eaa6579743bd4a60a29c1e018c3ab12e7f0a375ad3206829d91521f"} Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.709187 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.710556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" event={"ID":"3e928854-10aa-4457-9934-d78b4ac686eb","Type":"ContainerStarted","Data":"89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9"} Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.710614 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" event={"ID":"3e928854-10aa-4457-9934-d78b4ac686eb","Type":"ContainerStarted","Data":"011ebefde832f6f7be7669680d9fbd689bf7441da52c73d31ec830d3eb0619b8"} Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.711734 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.715581 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" event={"ID":"7c4825b1-eb26-4071-82a3-2ab2a2d5ffc2","Type":"ContainerStarted","Data":"33938a42b1fa375d63534feba654073c2cbd5b9c6e29621d806913b25a7da3f4"} Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.715898 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.719186 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.723372 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.747829 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" podStartSLOduration=3.747811149 podStartE2EDuration="3.747811149s" podCreationTimestamp="2025-12-02 10:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:12:58.745863863 +0000 UTC m=+302.941038165" watchObservedRunningTime="2025-12-02 10:12:58.747811149 +0000 UTC m=+302.942985451" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.767591 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" podStartSLOduration=3.767568037 podStartE2EDuration="3.767568037s" podCreationTimestamp="2025-12-02 10:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:12:58.76628539 +0000 UTC m=+302.961459702" watchObservedRunningTime="2025-12-02 10:12:58.767568037 +0000 UTC m=+302.962742339" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.796523 4813 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" podStartSLOduration=89.79649897 podStartE2EDuration="1m29.79649897s" podCreationTimestamp="2025-12-02 10:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:12:58.789245871 +0000 UTC m=+302.984420173" watchObservedRunningTime="2025-12-02 10:12:58.79649897 +0000 UTC m=+302.991673272" Dec 02 10:12:58 crc kubenswrapper[4813]: I1202 10:12:58.885246 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-985c66b4-j25kq" Dec 02 10:13:01 crc kubenswrapper[4813]: I1202 10:13:01.924431 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 10:13:05 crc kubenswrapper[4813]: I1202 10:13:05.401636 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 10:13:07 crc kubenswrapper[4813]: I1202 10:13:07.016145 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.126337 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"] Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.127253 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" podUID="3e928854-10aa-4457-9934-d78b4ac686eb" containerName="route-controller-manager" containerID="cri-o://89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9" gracePeriod=30 Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.581373 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.584732 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qhz\" (UniqueName: \"kubernetes.io/projected/3e928854-10aa-4457-9934-d78b4ac686eb-kube-api-access-r8qhz\") pod \"3e928854-10aa-4457-9934-d78b4ac686eb\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.584788 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e928854-10aa-4457-9934-d78b4ac686eb-serving-cert\") pod \"3e928854-10aa-4457-9934-d78b4ac686eb\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.585796 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-client-ca\") pod \"3e928854-10aa-4457-9934-d78b4ac686eb\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.585871 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-config\") pod \"3e928854-10aa-4457-9934-d78b4ac686eb\" (UID: \"3e928854-10aa-4457-9934-d78b4ac686eb\") " Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.586750 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e928854-10aa-4457-9934-d78b4ac686eb" (UID: "3e928854-10aa-4457-9934-d78b4ac686eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.586776 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-config" (OuterVolumeSpecName: "config") pod "3e928854-10aa-4457-9934-d78b4ac686eb" (UID: "3e928854-10aa-4457-9934-d78b4ac686eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.589984 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e928854-10aa-4457-9934-d78b4ac686eb-kube-api-access-r8qhz" (OuterVolumeSpecName: "kube-api-access-r8qhz") pod "3e928854-10aa-4457-9934-d78b4ac686eb" (UID: "3e928854-10aa-4457-9934-d78b4ac686eb"). InnerVolumeSpecName "kube-api-access-r8qhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.590016 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e928854-10aa-4457-9934-d78b4ac686eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e928854-10aa-4457-9934-d78b4ac686eb" (UID: "3e928854-10aa-4457-9934-d78b4ac686eb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.686440 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.686482 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8qhz\" (UniqueName: \"kubernetes.io/projected/3e928854-10aa-4457-9934-d78b4ac686eb-kube-api-access-r8qhz\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.686494 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e928854-10aa-4457-9934-d78b4ac686eb-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.686503 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e928854-10aa-4457-9934-d78b4ac686eb-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.814245 4813 generic.go:334] "Generic (PLEG): container finished" podID="3e928854-10aa-4457-9934-d78b4ac686eb" containerID="89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9" exitCode=0 Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.814290 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" event={"ID":"3e928854-10aa-4457-9934-d78b4ac686eb","Type":"ContainerDied","Data":"89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9"} Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.814309 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.814615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw" event={"ID":"3e928854-10aa-4457-9934-d78b4ac686eb","Type":"ContainerDied","Data":"011ebefde832f6f7be7669680d9fbd689bf7441da52c73d31ec830d3eb0619b8"} Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.814636 4813 scope.go:117] "RemoveContainer" containerID="89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.829799 4813 scope.go:117] "RemoveContainer" containerID="89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9" Dec 02 10:13:15 crc kubenswrapper[4813]: E1202 10:13:15.830416 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9\": container with ID starting with 89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9 not found: ID does not exist" containerID="89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.830471 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9"} err="failed to get container status \"89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9\": rpc error: code = NotFound desc = could not find container \"89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9\": container with ID starting with 89b8f6e2c77446ef90fa0f0cd8d62b8cb78fe2207f7b086306e8c5b206b428f9 not found: ID does not exist" Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.845208 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"] Dec 02 10:13:15 crc kubenswrapper[4813]: I1202 10:13:15.850490 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d5775f87f-lj7bw"] Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.075084 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e928854-10aa-4457-9934-d78b4ac686eb" path="/var/lib/kubelet/pods/3e928854-10aa-4457-9934-d78b4ac686eb/volumes" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.240398 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb"] Dec 02 10:13:16 crc kubenswrapper[4813]: E1202 10:13:16.240638 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e928854-10aa-4457-9934-d78b4ac686eb" containerName="route-controller-manager" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.240655 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e928854-10aa-4457-9934-d78b4ac686eb" containerName="route-controller-manager" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.240782 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e928854-10aa-4457-9934-d78b4ac686eb" containerName="route-controller-manager" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.241248 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.244302 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.244549 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.245510 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.245797 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.246003 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.246447 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.256718 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb"] Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.294124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37563c3e-2e88-418f-aa1e-bc7870becba4-serving-cert\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.294245 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37563c3e-2e88-418f-aa1e-bc7870becba4-client-ca\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.294402 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37563c3e-2e88-418f-aa1e-bc7870becba4-config\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.395923 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37563c3e-2e88-418f-aa1e-bc7870becba4-serving-cert\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.396306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37563c3e-2e88-418f-aa1e-bc7870becba4-client-ca\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: 
\"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.396357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37563c3e-2e88-418f-aa1e-bc7870becba4-config\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.396382 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2h8d\" (UniqueName: \"kubernetes.io/projected/37563c3e-2e88-418f-aa1e-bc7870becba4-kube-api-access-v2h8d\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.397410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37563c3e-2e88-418f-aa1e-bc7870becba4-client-ca\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.397880 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37563c3e-2e88-418f-aa1e-bc7870becba4-config\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.402749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37563c3e-2e88-418f-aa1e-bc7870becba4-serving-cert\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.497564 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2h8d\" (UniqueName: \"kubernetes.io/projected/37563c3e-2e88-418f-aa1e-bc7870becba4-kube-api-access-v2h8d\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.520552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2h8d\" (UniqueName: \"kubernetes.io/projected/37563c3e-2e88-418f-aa1e-bc7870becba4-kube-api-access-v2h8d\") pod \"route-controller-manager-dd9f66bdb-rtztb\" (UID: \"37563c3e-2e88-418f-aa1e-bc7870becba4\") " pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.564410 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:16 crc kubenswrapper[4813]: I1202 10:13:16.967416 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb"] Dec 02 10:13:17 crc kubenswrapper[4813]: I1202 10:13:17.831332 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" event={"ID":"37563c3e-2e88-418f-aa1e-bc7870becba4","Type":"ContainerStarted","Data":"61a58bf413c0335b053d19904f540ebc619d5bcabf71fefbcf9cd7ab1dff8380"} Dec 02 10:13:17 crc kubenswrapper[4813]: I1202 10:13:17.831717 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:17 crc kubenswrapper[4813]: I1202 10:13:17.831748 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" event={"ID":"37563c3e-2e88-418f-aa1e-bc7870becba4","Type":"ContainerStarted","Data":"4de4fd2afc5f0b894f60725cedb545ff094e27db3204952073cee4abd438b811"} Dec 02 10:13:17 crc kubenswrapper[4813]: I1202 10:13:17.837408 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" Dec 02 10:13:17 crc kubenswrapper[4813]: I1202 10:13:17.847290 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dd9f66bdb-rtztb" podStartSLOduration=2.847268712 podStartE2EDuration="2.847268712s" podCreationTimestamp="2025-12-02 10:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:13:17.845084479 +0000 UTC m=+322.040258771" watchObservedRunningTime="2025-12-02 10:13:17.847268712 +0000 UTC m=+322.042443014" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.270610 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4tjcv"] Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.272372 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.287537 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4tjcv"] Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.411158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/896a733f-f3b9-4da7-9353-44f38b6e8809-trusted-ca\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.411223 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/896a733f-f3b9-4da7-9353-44f38b6e8809-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.411248 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/896a733f-f3b9-4da7-9353-44f38b6e8809-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.411513 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.411667 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/896a733f-f3b9-4da7-9353-44f38b6e8809-registry-certificates\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.411749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-bound-sa-token\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.411779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsl5\" (UniqueName: \"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-kube-api-access-rvsl5\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.411861 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-registry-tls\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.436564 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.513447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/896a733f-f3b9-4da7-9353-44f38b6e8809-registry-certificates\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.513503 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvsl5\" (UniqueName: \"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-kube-api-access-rvsl5\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.513527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-bound-sa-token\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.513561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-registry-tls\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.513609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/896a733f-f3b9-4da7-9353-44f38b6e8809-trusted-ca\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.513631 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/896a733f-f3b9-4da7-9353-44f38b6e8809-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.513651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/896a733f-f3b9-4da7-9353-44f38b6e8809-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.514396 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/896a733f-f3b9-4da7-9353-44f38b6e8809-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.515077 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/896a733f-f3b9-4da7-9353-44f38b6e8809-registry-certificates\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.515266 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/896a733f-f3b9-4da7-9353-44f38b6e8809-trusted-ca\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.520833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-registry-tls\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.523197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/896a733f-f3b9-4da7-9353-44f38b6e8809-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.530107 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-bound-sa-token\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.530132 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvsl5\" (UniqueName: \"kubernetes.io/projected/896a733f-f3b9-4da7-9353-44f38b6e8809-kube-api-access-rvsl5\") pod \"image-registry-66df7c8f76-4tjcv\" (UID: \"896a733f-f3b9-4da7-9353-44f38b6e8809\") " pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.595653 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:40 crc kubenswrapper[4813]: I1202 10:13:40.974700 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4tjcv"] Dec 02 10:13:41 crc kubenswrapper[4813]: I1202 10:13:41.963056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" event={"ID":"896a733f-f3b9-4da7-9353-44f38b6e8809","Type":"ContainerStarted","Data":"861c2db73b7b028bc00edb473d6787ac09d909ae3447b2e3ec3e206275718d94"} Dec 02 10:13:41 crc kubenswrapper[4813]: I1202 10:13:41.963154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" event={"ID":"896a733f-f3b9-4da7-9353-44f38b6e8809","Type":"ContainerStarted","Data":"f1e6202f44c7fbac0658fbeeb0ef4e41c37c295cb8bf952822ea6bcb60f141d9"} Dec 02 10:13:41 crc kubenswrapper[4813]: I1202 10:13:41.982901 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" podStartSLOduration=1.9828794429999999 podStartE2EDuration="1.982879443s" podCreationTimestamp="2025-12-02 10:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:13:41.980938915 +0000 UTC m=+346.176113247" watchObservedRunningTime="2025-12-02 10:13:41.982879443 +0000 UTC m=+346.178053775" Dec 02 10:13:42 crc kubenswrapper[4813]: I1202 10:13:42.967085 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.123949 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6488c7567-59pxp"] Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.124971 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" podUID="5961752a-37a1-4d64-95e6-9181e5960434" containerName="controller-manager" containerID="cri-o://7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4" gracePeriod=30 Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.510421 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.609387 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-config\") pod \"5961752a-37a1-4d64-95e6-9181e5960434\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.609475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbd2\" (UniqueName: \"kubernetes.io/projected/5961752a-37a1-4d64-95e6-9181e5960434-kube-api-access-brbd2\") pod \"5961752a-37a1-4d64-95e6-9181e5960434\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.609501 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-client-ca\") pod \"5961752a-37a1-4d64-95e6-9181e5960434\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.609574 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5961752a-37a1-4d64-95e6-9181e5960434-serving-cert\") pod \"5961752a-37a1-4d64-95e6-9181e5960434\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.609611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-proxy-ca-bundles\") pod \"5961752a-37a1-4d64-95e6-9181e5960434\" (UID: \"5961752a-37a1-4d64-95e6-9181e5960434\") " Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.610346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5961752a-37a1-4d64-95e6-9181e5960434" (UID: "5961752a-37a1-4d64-95e6-9181e5960434"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.610360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-client-ca" (OuterVolumeSpecName: "client-ca") pod "5961752a-37a1-4d64-95e6-9181e5960434" (UID: "5961752a-37a1-4d64-95e6-9181e5960434"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.610465 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-config" (OuterVolumeSpecName: "config") pod "5961752a-37a1-4d64-95e6-9181e5960434" (UID: "5961752a-37a1-4d64-95e6-9181e5960434"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.623297 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5961752a-37a1-4d64-95e6-9181e5960434-kube-api-access-brbd2" (OuterVolumeSpecName: "kube-api-access-brbd2") pod "5961752a-37a1-4d64-95e6-9181e5960434" (UID: "5961752a-37a1-4d64-95e6-9181e5960434"). InnerVolumeSpecName "kube-api-access-brbd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.623315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5961752a-37a1-4d64-95e6-9181e5960434-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5961752a-37a1-4d64-95e6-9181e5960434" (UID: "5961752a-37a1-4d64-95e6-9181e5960434"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.710733 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5961752a-37a1-4d64-95e6-9181e5960434-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.710776 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.710791 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.710803 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbd2\" (UniqueName: \"kubernetes.io/projected/5961752a-37a1-4d64-95e6-9181e5960434-kube-api-access-brbd2\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:55 crc kubenswrapper[4813]: I1202 10:13:55.710811 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5961752a-37a1-4d64-95e6-9181e5960434-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.038344 4813 generic.go:334] "Generic (PLEG): container finished" podID="5961752a-37a1-4d64-95e6-9181e5960434" containerID="7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4" exitCode=0 Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.038400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" event={"ID":"5961752a-37a1-4d64-95e6-9181e5960434","Type":"ContainerDied","Data":"7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4"} Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.038428 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.038457 4813 scope.go:117] "RemoveContainer" containerID="7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.038443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6488c7567-59pxp" event={"ID":"5961752a-37a1-4d64-95e6-9181e5960434","Type":"ContainerDied","Data":"8fbd2b6e2eaa6579743bd4a60a29c1e018c3ab12e7f0a375ad3206829d91521f"} Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.059380 4813 scope.go:117] "RemoveContainer" containerID="7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4" Dec 02 10:13:56 crc kubenswrapper[4813]: E1202 10:13:56.060519 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4\": container with ID starting with 7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4 not found: ID does not exist" containerID="7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.060676 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4"} err="failed to get container status \"7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4\": rpc error: code = NotFound desc = could not find container \"7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4\": container with ID starting with 7cf7bde2512120c751c9c1ebf63b0ca6afa41bf4be367fc29c55a79e29f6d0c4 not found: ID does not exist" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.074620 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6488c7567-59pxp"] Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.077530 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6488c7567-59pxp"] Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.270700 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bc87568b4-5c7vs"] Dec 02 10:13:56 crc kubenswrapper[4813]: E1202 10:13:56.271096 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5961752a-37a1-4d64-95e6-9181e5960434" containerName="controller-manager" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.271117 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5961752a-37a1-4d64-95e6-9181e5960434" containerName="controller-manager" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.271313 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5961752a-37a1-4d64-95e6-9181e5960434" containerName="controller-manager" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.271986 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.274616 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.274891 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.276148 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.276248 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.278207 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.278414 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.282478 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bc87568b4-5c7vs"] Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.285698 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.421549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbjh5\" (UniqueName: \"kubernetes.io/projected/55bb8c3d-53b3-4303-bac5-bc89ca608036-kube-api-access-bbjh5\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.421643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-config\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.421694 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55bb8c3d-53b3-4303-bac5-bc89ca608036-serving-cert\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.421732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-proxy-ca-bundles\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.421762 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-client-ca\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.522928 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-client-ca\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.523048 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbjh5\" (UniqueName: \"kubernetes.io/projected/55bb8c3d-53b3-4303-bac5-bc89ca608036-kube-api-access-bbjh5\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.523119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-config\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.523166 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55bb8c3d-53b3-4303-bac5-bc89ca608036-serving-cert\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.523197 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-proxy-ca-bundles\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.524203 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-client-ca\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.524573 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-proxy-ca-bundles\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.525643 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55bb8c3d-53b3-4303-bac5-bc89ca608036-config\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " 
pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.527178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55bb8c3d-53b3-4303-bac5-bc89ca608036-serving-cert\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.540430 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbjh5\" (UniqueName: \"kubernetes.io/projected/55bb8c3d-53b3-4303-bac5-bc89ca608036-kube-api-access-bbjh5\") pod \"controller-manager-5bc87568b4-5c7vs\" (UID: \"55bb8c3d-53b3-4303-bac5-bc89ca608036\") " pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.600143 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:56 crc kubenswrapper[4813]: I1202 10:13:56.783271 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bc87568b4-5c7vs"] Dec 02 10:13:56 crc kubenswrapper[4813]: W1202 10:13:56.784857 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55bb8c3d_53b3_4303_bac5_bc89ca608036.slice/crio-38d47bdf2b02bb5c4ec1de0133a44e1feb901256549da0589c921452af272d9e WatchSource:0}: Error finding container 38d47bdf2b02bb5c4ec1de0133a44e1feb901256549da0589c921452af272d9e: Status 404 returned error can't find the container with id 38d47bdf2b02bb5c4ec1de0133a44e1feb901256549da0589c921452af272d9e Dec 02 10:13:57 crc kubenswrapper[4813]: I1202 10:13:57.053047 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" event={"ID":"55bb8c3d-53b3-4303-bac5-bc89ca608036","Type":"ContainerStarted","Data":"b06771448bfd4cefb9f70024042ce6b2e300d45c19eee58751a776e5b39f476b"} Dec 02 10:13:57 crc kubenswrapper[4813]: I1202 10:13:57.053135 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" event={"ID":"55bb8c3d-53b3-4303-bac5-bc89ca608036","Type":"ContainerStarted","Data":"38d47bdf2b02bb5c4ec1de0133a44e1feb901256549da0589c921452af272d9e"} Dec 02 10:13:57 crc kubenswrapper[4813]: I1202 10:13:57.053424 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:57 crc kubenswrapper[4813]: I1202 10:13:57.059004 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" Dec 02 10:13:57 crc kubenswrapper[4813]: I1202 10:13:57.076409 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bc87568b4-5c7vs" podStartSLOduration=2.076383087 podStartE2EDuration="2.076383087s" podCreationTimestamp="2025-12-02 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:13:57.074273324 +0000 UTC m=+361.269447626" watchObservedRunningTime="2025-12-02 10:13:57.076383087 +0000 UTC m=+361.271557389" Dec 02 10:13:58 crc 
kubenswrapper[4813]: I1202 10:13:58.078327 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5961752a-37a1-4d64-95e6-9181e5960434" path="/var/lib/kubelet/pods/5961752a-37a1-4d64-95e6-9181e5960434/volumes" Dec 02 10:13:59 crc kubenswrapper[4813]: E1202 10:13:59.895474 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5961752a_37a1_4d64_95e6_9181e5960434.slice\": RecentStats: unable to find data in memory cache]" Dec 02 10:14:00 crc kubenswrapper[4813]: I1202 10:14:00.600581 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4tjcv" Dec 02 10:14:00 crc kubenswrapper[4813]: I1202 10:14:00.673435 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zkbcp"] Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.274064 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.274856 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.605838 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ljxzg"] Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.606279 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ljxzg" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerName="registry-server" containerID="cri-o://99f3a6f4ceb1c3ff32289089741fae6c4572240ba7234aee3c2bccf19af863cf" gracePeriod=30 Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.614437 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4c78n"] Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.614681 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4c78n" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerName="registry-server" containerID="cri-o://2ad929ff6e0a868f83c23feb4519e730dc3d80479bcd9a58e617e930b04ebcba" gracePeriod=30 Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.624318 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tt449"] Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.624554 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" containerID="cri-o://565733c86343a251acdbc84e2ba15c8cdc61c337ccffe79a68a7c85a7d56d4f6" gracePeriod=30 Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.631465 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-csjp9"] Dec 02 10:14:04 crc 
kubenswrapper[4813]: I1202 10:14:04.631895 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-csjp9" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerName="registry-server" containerID="cri-o://e6a612d4a513f010b2815621410c3da88e479593fd71d49b132a9cc17ea32111" gracePeriod=30 Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.644858 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2t6q"] Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.645278 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g2t6q" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="registry-server" containerID="cri-o://9fe92abf84d632f2c9974f34c46d5f0d0096577a8ac82ce0e60af71d60c856fc" gracePeriod=30 Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.651878 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7fr5"] Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.653241 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.686877 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7fr5"] Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.762121 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.762193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.762219 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvlf\" (UniqueName: \"kubernetes.io/projected/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-kube-api-access-5kvlf\") pod \"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.863687 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.863735 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvlf\" (UniqueName: \"kubernetes.io/projected/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-kube-api-access-5kvlf\") pod 
\"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.863801 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.867400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.872865 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.887375 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kvlf\" (UniqueName: \"kubernetes.io/projected/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-kube-api-access-5kvlf\") pod \"marketplace-operator-79b997595-q7fr5\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:04 crc kubenswrapper[4813]: I1202 10:14:04.978237 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.107883 4813 generic.go:334] "Generic (PLEG): container finished" podID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerID="565733c86343a251acdbc84e2ba15c8cdc61c337ccffe79a68a7c85a7d56d4f6" exitCode=0 Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.107961 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" event={"ID":"03ddc93f-c104-482e-a615-1f6ce52c62b8","Type":"ContainerDied","Data":"565733c86343a251acdbc84e2ba15c8cdc61c337ccffe79a68a7c85a7d56d4f6"} Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.108336 4813 scope.go:117] "RemoveContainer" containerID="2c85e95c5e5841e150d6e640e24feac3189582149103b30ec543428b950b2b5a" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.111556 4813 generic.go:334] "Generic (PLEG): container finished" podID="44363502-e734-4d2e-8f4b-eec2442afe63" containerID="9fe92abf84d632f2c9974f34c46d5f0d0096577a8ac82ce0e60af71d60c856fc" exitCode=0 Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.111627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2t6q" event={"ID":"44363502-e734-4d2e-8f4b-eec2442afe63","Type":"ContainerDied","Data":"9fe92abf84d632f2c9974f34c46d5f0d0096577a8ac82ce0e60af71d60c856fc"} Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.114059 4813 generic.go:334] "Generic (PLEG): container finished" podID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerID="e6a612d4a513f010b2815621410c3da88e479593fd71d49b132a9cc17ea32111" exitCode=0 Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.114133 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csjp9" event={"ID":"64b4cb5e-0d3b-437c-8287-599558fd972b","Type":"ContainerDied","Data":"e6a612d4a513f010b2815621410c3da88e479593fd71d49b132a9cc17ea32111"} Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.118772 4813 generic.go:334] "Generic (PLEG): container finished" podID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerID="99f3a6f4ceb1c3ff32289089741fae6c4572240ba7234aee3c2bccf19af863cf" exitCode=0 Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.118830 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljxzg" event={"ID":"72dfffc2-b16b-47e4-9e6c-1c5562e48db0","Type":"ContainerDied","Data":"99f3a6f4ceb1c3ff32289089741fae6c4572240ba7234aee3c2bccf19af863cf"} Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.120874 4813 generic.go:334] "Generic (PLEG): container finished" podID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerID="2ad929ff6e0a868f83c23feb4519e730dc3d80479bcd9a58e617e930b04ebcba" exitCode=0 Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.120900 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c78n" event={"ID":"dc26ee61-a67c-4200-8cd7-4ca46e748fea","Type":"ContainerDied","Data":"2ad929ff6e0a868f83c23feb4519e730dc3d80479bcd9a58e617e930b04ebcba"} Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.184540 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.275484 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics\") pod \"03ddc93f-c104-482e-a615-1f6ce52c62b8\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.275704 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmplb\" (UniqueName: \"kubernetes.io/projected/03ddc93f-c104-482e-a615-1f6ce52c62b8-kube-api-access-fmplb\") pod \"03ddc93f-c104-482e-a615-1f6ce52c62b8\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.275757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca\") pod \"03ddc93f-c104-482e-a615-1f6ce52c62b8\" (UID: \"03ddc93f-c104-482e-a615-1f6ce52c62b8\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.276671 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "03ddc93f-c104-482e-a615-1f6ce52c62b8" (UID: "03ddc93f-c104-482e-a615-1f6ce52c62b8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.281123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "03ddc93f-c104-482e-a615-1f6ce52c62b8" (UID: "03ddc93f-c104-482e-a615-1f6ce52c62b8"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.281600 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ddc93f-c104-482e-a615-1f6ce52c62b8-kube-api-access-fmplb" (OuterVolumeSpecName: "kube-api-access-fmplb") pod "03ddc93f-c104-482e-a615-1f6ce52c62b8" (UID: "03ddc93f-c104-482e-a615-1f6ce52c62b8"). InnerVolumeSpecName "kube-api-access-fmplb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.321883 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ljxzg" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.377747 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmplb\" (UniqueName: \"kubernetes.io/projected/03ddc93f-c104-482e-a615-1f6ce52c62b8-kube-api-access-fmplb\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.377787 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.377801 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/03ddc93f-c104-482e-a615-1f6ce52c62b8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.430860 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2t6q" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.439832 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c78n" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.479125 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-catalog-content\") pod \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.479407 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-utilities\") pod \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.479513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmkx\" (UniqueName: \"kubernetes.io/projected/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-kube-api-access-vcmkx\") pod \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\" (UID: \"72dfffc2-b16b-47e4-9e6c-1c5562e48db0\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.483054 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-utilities" (OuterVolumeSpecName: "utilities") pod "72dfffc2-b16b-47e4-9e6c-1c5562e48db0" (UID: "72dfffc2-b16b-47e4-9e6c-1c5562e48db0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.484569 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-kube-api-access-vcmkx" (OuterVolumeSpecName: "kube-api-access-vcmkx") pod "72dfffc2-b16b-47e4-9e6c-1c5562e48db0" (UID: "72dfffc2-b16b-47e4-9e6c-1c5562e48db0"). InnerVolumeSpecName "kube-api-access-vcmkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.542170 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72dfffc2-b16b-47e4-9e6c-1c5562e48db0" (UID: "72dfffc2-b16b-47e4-9e6c-1c5562e48db0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.544371 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7fr5"] Dec 02 10:14:05 crc kubenswrapper[4813]: W1202 10:14:05.550458 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34294f02_e9d2_4cfa_8d54_b87a4b743eb7.slice/crio-8ea8c33cf97a36d5b72aab9e28ce469e97e72810e6a8c94e24d6f4f0664d8f43 WatchSource:0}: Error finding container 8ea8c33cf97a36d5b72aab9e28ce469e97e72810e6a8c94e24d6f4f0664d8f43: Status 404 returned error can't find the container with id 8ea8c33cf97a36d5b72aab9e28ce469e97e72810e6a8c94e24d6f4f0664d8f43 Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.587435 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jb4l\" (UniqueName: \"kubernetes.io/projected/44363502-e734-4d2e-8f4b-eec2442afe63-kube-api-access-2jb4l\") pod \"44363502-e734-4d2e-8f4b-eec2442afe63\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.587506 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-catalog-content\") pod \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.587531 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-utilities\") pod \"44363502-e734-4d2e-8f4b-eec2442afe63\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.587671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-catalog-content\") pod \"44363502-e734-4d2e-8f4b-eec2442afe63\" (UID: \"44363502-e734-4d2e-8f4b-eec2442afe63\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.587771 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-utilities\") pod \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.587799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6z6f\" (UniqueName: \"kubernetes.io/projected/dc26ee61-a67c-4200-8cd7-4ca46e748fea-kube-api-access-j6z6f\") pod \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\" (UID: \"dc26ee61-a67c-4200-8cd7-4ca46e748fea\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.588142 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.588169 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcmkx\" (UniqueName: \"kubernetes.io/projected/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-kube-api-access-vcmkx\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.588183 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72dfffc2-b16b-47e4-9e6c-1c5562e48db0-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.590138 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-utilities" (OuterVolumeSpecName: "utilities") pod "dc26ee61-a67c-4200-8cd7-4ca46e748fea" (UID: "dc26ee61-a67c-4200-8cd7-4ca46e748fea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.591177 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-utilities" (OuterVolumeSpecName: "utilities") pod "44363502-e734-4d2e-8f4b-eec2442afe63" (UID: "44363502-e734-4d2e-8f4b-eec2442afe63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.592107 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc26ee61-a67c-4200-8cd7-4ca46e748fea-kube-api-access-j6z6f" (OuterVolumeSpecName: "kube-api-access-j6z6f") pod "dc26ee61-a67c-4200-8cd7-4ca46e748fea" (UID: "dc26ee61-a67c-4200-8cd7-4ca46e748fea"). InnerVolumeSpecName "kube-api-access-j6z6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.594377 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44363502-e734-4d2e-8f4b-eec2442afe63-kube-api-access-2jb4l" (OuterVolumeSpecName: "kube-api-access-2jb4l") pod "44363502-e734-4d2e-8f4b-eec2442afe63" (UID: "44363502-e734-4d2e-8f4b-eec2442afe63"). InnerVolumeSpecName "kube-api-access-2jb4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.651349 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc26ee61-a67c-4200-8cd7-4ca46e748fea" (UID: "dc26ee61-a67c-4200-8cd7-4ca46e748fea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.676556 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-csjp9" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.692892 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.692956 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6z6f\" (UniqueName: \"kubernetes.io/projected/dc26ee61-a67c-4200-8cd7-4ca46e748fea-kube-api-access-j6z6f\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.692973 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jb4l\" (UniqueName: \"kubernetes.io/projected/44363502-e734-4d2e-8f4b-eec2442afe63-kube-api-access-2jb4l\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.692984 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc26ee61-a67c-4200-8cd7-4ca46e748fea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.692996 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.716572 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44363502-e734-4d2e-8f4b-eec2442afe63" (UID: "44363502-e734-4d2e-8f4b-eec2442afe63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.794409 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctwvt\" (UniqueName: \"kubernetes.io/projected/64b4cb5e-0d3b-437c-8287-599558fd972b-kube-api-access-ctwvt\") pod \"64b4cb5e-0d3b-437c-8287-599558fd972b\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.794824 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-utilities\") pod \"64b4cb5e-0d3b-437c-8287-599558fd972b\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.794932 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-catalog-content\") pod \"64b4cb5e-0d3b-437c-8287-599558fd972b\" (UID: \"64b4cb5e-0d3b-437c-8287-599558fd972b\") " Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.795242 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44363502-e734-4d2e-8f4b-eec2442afe63-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.795682 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-utilities" (OuterVolumeSpecName: "utilities") pod "64b4cb5e-0d3b-437c-8287-599558fd972b" (UID: "64b4cb5e-0d3b-437c-8287-599558fd972b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.799722 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b4cb5e-0d3b-437c-8287-599558fd972b-kube-api-access-ctwvt" (OuterVolumeSpecName: "kube-api-access-ctwvt") pod "64b4cb5e-0d3b-437c-8287-599558fd972b" (UID: "64b4cb5e-0d3b-437c-8287-599558fd972b"). InnerVolumeSpecName "kube-api-access-ctwvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.813966 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64b4cb5e-0d3b-437c-8287-599558fd972b" (UID: "64b4cb5e-0d3b-437c-8287-599558fd972b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.896913 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctwvt\" (UniqueName: \"kubernetes.io/projected/64b4cb5e-0d3b-437c-8287-599558fd972b-kube-api-access-ctwvt\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.896959 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:05 crc kubenswrapper[4813]: I1202 10:14:05.896970 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b4cb5e-0d3b-437c-8287-599558fd972b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.130512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4c78n" event={"ID":"dc26ee61-a67c-4200-8cd7-4ca46e748fea","Type":"ContainerDied","Data":"2f6abb6751259b0488f92b2b97797df4bf8aab7df1d4e2a8edc22f597e4609f2"} Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.130563 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4c78n" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.130572 4813 scope.go:117] "RemoveContainer" containerID="2ad929ff6e0a868f83c23feb4519e730dc3d80479bcd9a58e617e930b04ebcba" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.132675 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" event={"ID":"34294f02-e9d2-4cfa-8d54-b87a4b743eb7","Type":"ContainerStarted","Data":"35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb"} Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.132793 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" event={"ID":"34294f02-e9d2-4cfa-8d54-b87a4b743eb7","Type":"ContainerStarted","Data":"8ea8c33cf97a36d5b72aab9e28ce469e97e72810e6a8c94e24d6f4f0664d8f43"} Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.133342 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.135797 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.135794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tt449" event={"ID":"03ddc93f-c104-482e-a615-1f6ce52c62b8","Type":"ContainerDied","Data":"bc30edea639de0ee7f13337036ce5aa547c309cfb539845231d981526fd47c2f"} Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.138296 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2t6q" event={"ID":"44363502-e734-4d2e-8f4b-eec2442afe63","Type":"ContainerDied","Data":"d8e436d4d2dbd5b34d12ebc7209418d6174ffc8f29513335570ee43914a4eb02"} Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.138351 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2t6q" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.139643 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.141510 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-csjp9" event={"ID":"64b4cb5e-0d3b-437c-8287-599558fd972b","Type":"ContainerDied","Data":"d5b4431a4203c56ffd887eae35827fe9f38b2df79fa2684228387e0cca052bcf"} Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.141618 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-csjp9" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.148085 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljxzg" event={"ID":"72dfffc2-b16b-47e4-9e6c-1c5562e48db0","Type":"ContainerDied","Data":"8bcc86ae0081a1db42d3cf7ecfbffd22cee493e4c9ff801b1cc0899826f88d23"} Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.148171 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ljxzg" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.155633 4813 scope.go:117] "RemoveContainer" containerID="1ed67d33d13bfe3d383e013e36fb2d59fe7ae06d7a95573848c80adc55cc9bef" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.159418 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" podStartSLOduration=2.159394274 podStartE2EDuration="2.159394274s" podCreationTimestamp="2025-12-02 10:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:14:06.153579809 +0000 UTC m=+370.348754131" watchObservedRunningTime="2025-12-02 10:14:06.159394274 +0000 UTC m=+370.354568576" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.175617 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4c78n"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.181748 4813 scope.go:117] "RemoveContainer" containerID="78d283f491e3443d0855192b51e9d883d79f716dec4790f4c487ead9995133cb" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.188133 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4c78n"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.193060 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2t6q"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.201779 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g2t6q"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.203371 4813 scope.go:117] "RemoveContainer" containerID="565733c86343a251acdbc84e2ba15c8cdc61c337ccffe79a68a7c85a7d56d4f6" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.207214 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ljxzg"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.212342 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ljxzg"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.232920 4813 scope.go:117] "RemoveContainer" containerID="9fe92abf84d632f2c9974f34c46d5f0d0096577a8ac82ce0e60af71d60c856fc" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.248691 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tt449"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.254676 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tt449"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.260425 4813 scope.go:117] "RemoveContainer" containerID="b42530056a58f07329eb640cdf41407605c1d1df0eb82ae8354ff3b726f37954" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.261644 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-csjp9"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.264206 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-csjp9"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.284533 4813 scope.go:117] "RemoveContainer" containerID="389f39e78af5d21b0b879cbdfb6fb617fc4a0ca55a461a3fd359237ee826d2b9" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.300434 4813 
scope.go:117] "RemoveContainer" containerID="e6a612d4a513f010b2815621410c3da88e479593fd71d49b132a9cc17ea32111" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.328211 4813 scope.go:117] "RemoveContainer" containerID="9363524a92cace456a56826e194b771bba60f7d94c392e235b305f28bf1eda42" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.354352 4813 scope.go:117] "RemoveContainer" containerID="57820e3607df661497a6686657b33e000e2eafb8d070536b3b808c7ac6384b95" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.368479 4813 scope.go:117] "RemoveContainer" containerID="99f3a6f4ceb1c3ff32289089741fae6c4572240ba7234aee3c2bccf19af863cf" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.387635 4813 scope.go:117] "RemoveContainer" containerID="552a666bfe13888bae6fea22dfd97abe2d32c523fa9c62a752d59b016874b862" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.406877 4813 scope.go:117] "RemoveContainer" containerID="ffd27258a73e766379e60732cc6b1ab49855c5a9c22f377015d62110a178dca3" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824334 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfp6c"] Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824657 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824674 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824692 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="extract-content" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824706 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="extract-content" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824723 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824732 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerName="extract-utilities" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824756 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerName="extract-utilities" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824777 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerName="extract-utilities" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824785 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerName="extract-utilities" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824797 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerName="extract-content" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824805 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" 
containerName="extract-content" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824813 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerName="extract-content" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824822 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerName="extract-content" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824830 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="extract-utilities" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824837 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="extract-utilities" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824845 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerName="extract-content" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824851 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerName="extract-content" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824859 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824865 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824875 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824882 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824893 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824900 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.824910 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerName="extract-utilities" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.824918 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerName="extract-utilities" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.825045 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.825060 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.825100 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.825114 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.825129 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" containerName="registry-server" Dec 02 10:14:06 crc kubenswrapper[4813]: E1202 10:14:06.825255 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.825266 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.825391 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" containerName="marketplace-operator" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.826311 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.830069 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.841863 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfp6c"] Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.915724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-catalog-content\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.915870 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmmfq\" (UniqueName: \"kubernetes.io/projected/d7558c9b-25d5-4d97-9b56-3955021119d7-kube-api-access-xmmfq\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:06 crc kubenswrapper[4813]: I1202 10:14:06.915937 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-utilities\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.017248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-catalog-content\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.017324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmmfq\" (UniqueName: \"kubernetes.io/projected/d7558c9b-25d5-4d97-9b56-3955021119d7-kube-api-access-xmmfq\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 
10:14:07.017381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-utilities\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.017934 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-catalog-content\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.018024 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-utilities\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.025693 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m6x2q"] Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.026926 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.033838 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.040997 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmmfq\" (UniqueName: \"kubernetes.io/projected/d7558c9b-25d5-4d97-9b56-3955021119d7-kube-api-access-xmmfq\") pod \"redhat-marketplace-wfp6c\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.042939 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m6x2q"] Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.118998 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-catalog-content\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.119096 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6mn8\" (UniqueName: \"kubernetes.io/projected/12baf46c-5044-48de-ae74-b07fdb2241a1-kube-api-access-b6mn8\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.119128 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-utilities\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.158662 4813 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.221775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-catalog-content\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.221849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6mn8\" (UniqueName: \"kubernetes.io/projected/12baf46c-5044-48de-ae74-b07fdb2241a1-kube-api-access-b6mn8\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.221874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-utilities\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.223332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-catalog-content\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.223426 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-utilities\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.242694 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6mn8\" (UniqueName: \"kubernetes.io/projected/12baf46c-5044-48de-ae74-b07fdb2241a1-kube-api-access-b6mn8\") pod \"redhat-operators-m6x2q\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.373019 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.579008 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfp6c"] Dec 02 10:14:07 crc kubenswrapper[4813]: W1202 10:14:07.583419 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7558c9b_25d5_4d97_9b56_3955021119d7.slice/crio-096dbbedf1d3d146c8c1eee476ffe194c6f5497239a2924ff65935548d526a2e WatchSource:0}: Error finding container 096dbbedf1d3d146c8c1eee476ffe194c6f5497239a2924ff65935548d526a2e: Status 404 returned error can't find the container with id 096dbbedf1d3d146c8c1eee476ffe194c6f5497239a2924ff65935548d526a2e Dec 02 10:14:07 crc kubenswrapper[4813]: I1202 10:14:07.785180 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m6x2q"] Dec 02 10:14:07 crc kubenswrapper[4813]: W1202 10:14:07.848942 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12baf46c_5044_48de_ae74_b07fdb2241a1.slice/crio-818d061e2068a3dde2039f03ab82bb966aff0039d316f9b9ced790ac463680f4 WatchSource:0}: Error finding container 818d061e2068a3dde2039f03ab82bb966aff0039d316f9b9ced790ac463680f4: Status 404 returned error can't find the container with id 818d061e2068a3dde2039f03ab82bb966aff0039d316f9b9ced790ac463680f4 Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.074836 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ddc93f-c104-482e-a615-1f6ce52c62b8" path="/var/lib/kubelet/pods/03ddc93f-c104-482e-a615-1f6ce52c62b8/volumes" Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.075698 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44363502-e734-4d2e-8f4b-eec2442afe63" path="/var/lib/kubelet/pods/44363502-e734-4d2e-8f4b-eec2442afe63/volumes" Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.076399 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b4cb5e-0d3b-437c-8287-599558fd972b" path="/var/lib/kubelet/pods/64b4cb5e-0d3b-437c-8287-599558fd972b/volumes" Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.077811 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72dfffc2-b16b-47e4-9e6c-1c5562e48db0" path="/var/lib/kubelet/pods/72dfffc2-b16b-47e4-9e6c-1c5562e48db0/volumes" Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.078418 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc26ee61-a67c-4200-8cd7-4ca46e748fea" path="/var/lib/kubelet/pods/dc26ee61-a67c-4200-8cd7-4ca46e748fea/volumes" Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.177933 4813 generic.go:334] "Generic (PLEG): container finished" podID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerID="2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2" exitCode=0 Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.177998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6x2q" event={"ID":"12baf46c-5044-48de-ae74-b07fdb2241a1","Type":"ContainerDied","Data":"2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2"} Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.178049 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6x2q" 
event={"ID":"12baf46c-5044-48de-ae74-b07fdb2241a1","Type":"ContainerStarted","Data":"818d061e2068a3dde2039f03ab82bb966aff0039d316f9b9ced790ac463680f4"} Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.181027 4813 generic.go:334] "Generic (PLEG): container finished" podID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerID="3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012" exitCode=0 Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.181181 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfp6c" event={"ID":"d7558c9b-25d5-4d97-9b56-3955021119d7","Type":"ContainerDied","Data":"3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012"} Dec 02 10:14:08 crc kubenswrapper[4813]: I1202 10:14:08.181245 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfp6c" event={"ID":"d7558c9b-25d5-4d97-9b56-3955021119d7","Type":"ContainerStarted","Data":"096dbbedf1d3d146c8c1eee476ffe194c6f5497239a2924ff65935548d526a2e"} Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.188378 4813 generic.go:334] "Generic (PLEG): container finished" podID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerID="69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000" exitCode=0 Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.188516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfp6c" event={"ID":"d7558c9b-25d5-4d97-9b56-3955021119d7","Type":"ContainerDied","Data":"69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000"} Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.223901 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxrcj"] Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.225146 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.227763 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.235102 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxrcj"]
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.357251 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-utilities\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.357426 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-catalog-content\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.357517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbv2f\" (UniqueName: \"kubernetes.io/projected/9685da25-5941-4ea8-8b0d-efa884cdf2ea-kube-api-access-mbv2f\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.423549 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-67hrf"]
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.424560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.428520 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.434414 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-67hrf"]
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.458486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-catalog-content\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.458542 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbv2f\" (UniqueName: \"kubernetes.io/projected/9685da25-5941-4ea8-8b0d-efa884cdf2ea-kube-api-access-mbv2f\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.458613 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-utilities\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.459823 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-catalog-content\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.460571 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-utilities\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.481594 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbv2f\" (UniqueName: \"kubernetes.io/projected/9685da25-5941-4ea8-8b0d-efa884cdf2ea-kube-api-access-mbv2f\") pod \"community-operators-lxrcj\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.541489 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.560245 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lljx\" (UniqueName: \"kubernetes.io/projected/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-kube-api-access-2lljx\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.560341 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-catalog-content\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.560526 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-utilities\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.661135 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-utilities\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.661194 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lljx\" (UniqueName: \"kubernetes.io/projected/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-kube-api-access-2lljx\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.661304 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-catalog-content\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.661726 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-utilities\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.661785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-catalog-content\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.686854 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lljx\" (UniqueName: \"kubernetes.io/projected/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-kube-api-access-2lljx\") pod \"certified-operators-67hrf\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.744713 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:09 crc kubenswrapper[4813]: I1202 10:14:09.946248 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxrcj"]
Dec 02 10:14:09 crc kubenswrapper[4813]: W1202 10:14:09.966360 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9685da25_5941_4ea8_8b0d_efa884cdf2ea.slice/crio-e81c3e110f88a6456c63a23f44223bba61d2675d1c3f9e048ecbeb3d8f7e93ed WatchSource:0}: Error finding container e81c3e110f88a6456c63a23f44223bba61d2675d1c3f9e048ecbeb3d8f7e93ed: Status 404 returned error can't find the container with id e81c3e110f88a6456c63a23f44223bba61d2675d1c3f9e048ecbeb3d8f7e93ed
Dec 02 10:14:10 crc kubenswrapper[4813]: E1202 10:14:10.036828 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5961752a_37a1_4d64_95e6_9181e5960434.slice\": RecentStats: unable to find data in memory cache]"
Dec 02 10:14:10 crc kubenswrapper[4813]: I1202 10:14:10.154381 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-67hrf"]
Dec 02 10:14:10 crc kubenswrapper[4813]: I1202 10:14:10.197406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6x2q" event={"ID":"12baf46c-5044-48de-ae74-b07fdb2241a1","Type":"ContainerStarted","Data":"c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b"}
Dec 02 10:14:10 crc kubenswrapper[4813]: I1202 10:14:10.198707 4813 generic.go:334] "Generic (PLEG): container finished" podID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerID="0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d" exitCode=0
Dec 02 10:14:10 crc kubenswrapper[4813]: I1202 10:14:10.198769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxrcj" event={"ID":"9685da25-5941-4ea8-8b0d-efa884cdf2ea","Type":"ContainerDied","Data":"0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d"}
Dec 02 10:14:10 crc kubenswrapper[4813]: I1202 10:14:10.198795 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxrcj" event={"ID":"9685da25-5941-4ea8-8b0d-efa884cdf2ea","Type":"ContainerStarted","Data":"e81c3e110f88a6456c63a23f44223bba61d2675d1c3f9e048ecbeb3d8f7e93ed"}
Dec 02 10:14:10 crc kubenswrapper[4813]: I1202 10:14:10.201570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfp6c" event={"ID":"d7558c9b-25d5-4d97-9b56-3955021119d7","Type":"ContainerStarted","Data":"ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95"}
Dec 02 10:14:10 crc kubenswrapper[4813]: W1202 10:14:10.213156 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44ba0b3e_3d1f_4134_8e7e_580f9fc218a9.slice/crio-af4915b5911d56298ab07de406415d50cbc88e1b86c04089c1727ce5c18de1ff WatchSource:0}: Error finding container af4915b5911d56298ab07de406415d50cbc88e1b86c04089c1727ce5c18de1ff: Status 404 returned error can't find the container with id af4915b5911d56298ab07de406415d50cbc88e1b86c04089c1727ce5c18de1ff
Dec 02 10:14:10 crc kubenswrapper[4813]: I1202 10:14:10.268963 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfp6c" podStartSLOduration=2.797048529 podStartE2EDuration="4.267971531s" podCreationTimestamp="2025-12-02 10:14:06 +0000 UTC" firstStartedPulling="2025-12-02 10:14:08.183788756 +0000 UTC m=+372.378963058" lastFinishedPulling="2025-12-02 10:14:09.654711758 +0000 UTC m=+373.849886060" observedRunningTime="2025-12-02 10:14:10.26393141 +0000 UTC m=+374.459105742" watchObservedRunningTime="2025-12-02 10:14:10.267971531 +0000 UTC m=+374.463145843"
Dec 02 10:14:11 crc kubenswrapper[4813]: I1202 10:14:11.214192 4813 generic.go:334] "Generic (PLEG): container finished" podID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerID="c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b" exitCode=0
Dec 02 10:14:11 crc kubenswrapper[4813]: I1202 10:14:11.214270 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6x2q" event={"ID":"12baf46c-5044-48de-ae74-b07fdb2241a1","Type":"ContainerDied","Data":"c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b"}
Dec 02 10:14:11 crc kubenswrapper[4813]: I1202 10:14:11.218657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxrcj" event={"ID":"9685da25-5941-4ea8-8b0d-efa884cdf2ea","Type":"ContainerStarted","Data":"b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071"}
Dec 02 10:14:11 crc kubenswrapper[4813]: I1202 10:14:11.221228 4813 generic.go:334] "Generic (PLEG): container finished" podID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerID="278032a7f458ca86de73911a6b252127f33da20924d227b260c58e07cc2bd24d" exitCode=0
Dec 02 10:14:11 crc kubenswrapper[4813]: I1202 10:14:11.221325 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67hrf" event={"ID":"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9","Type":"ContainerDied","Data":"278032a7f458ca86de73911a6b252127f33da20924d227b260c58e07cc2bd24d"}
Dec 02 10:14:11 crc kubenswrapper[4813]: I1202 10:14:11.221368 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67hrf" event={"ID":"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9","Type":"ContainerStarted","Data":"af4915b5911d56298ab07de406415d50cbc88e1b86c04089c1727ce5c18de1ff"}
Dec 02 10:14:12 crc kubenswrapper[4813]: I1202 10:14:12.228791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6x2q" event={"ID":"12baf46c-5044-48de-ae74-b07fdb2241a1","Type":"ContainerStarted","Data":"b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359"}
Dec 02 10:14:12 crc kubenswrapper[4813]: I1202 10:14:12.233259 4813 generic.go:334] "Generic (PLEG): container finished" podID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerID="b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071" exitCode=0
Dec 02 10:14:12 crc kubenswrapper[4813]: I1202 10:14:12.233362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxrcj" event={"ID":"9685da25-5941-4ea8-8b0d-efa884cdf2ea","Type":"ContainerDied","Data":"b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071"}
Dec 02 10:14:12 crc kubenswrapper[4813]: I1202 10:14:12.238998 4813 generic.go:334] "Generic (PLEG): container finished" podID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerID="e886201c53c74fe2ff16bcfa65f6874a54a446dd1c9fac7fabf7f4253a6a62fe" exitCode=0
Dec 02 10:14:12 crc kubenswrapper[4813]: I1202 10:14:12.239063 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67hrf" event={"ID":"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9","Type":"ContainerDied","Data":"e886201c53c74fe2ff16bcfa65f6874a54a446dd1c9fac7fabf7f4253a6a62fe"}
Dec 02 10:14:12 crc kubenswrapper[4813]: I1202 10:14:12.251659 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m6x2q" podStartSLOduration=1.7513075169999999 podStartE2EDuration="5.251639041s" podCreationTimestamp="2025-12-02 10:14:07 +0000 UTC" firstStartedPulling="2025-12-02 10:14:08.179414075 +0000 UTC m=+372.374588387" lastFinishedPulling="2025-12-02 10:14:11.679745609 +0000 UTC m=+375.874919911" observedRunningTime="2025-12-02 10:14:12.247418055 +0000 UTC m=+376.442592367" watchObservedRunningTime="2025-12-02 10:14:12.251639041 +0000 UTC m=+376.446813333"
Dec 02 10:14:13 crc kubenswrapper[4813]: I1202 10:14:13.249967 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxrcj" event={"ID":"9685da25-5941-4ea8-8b0d-efa884cdf2ea","Type":"ContainerStarted","Data":"86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646"}
Dec 02 10:14:13 crc kubenswrapper[4813]: I1202 10:14:13.252285 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67hrf" event={"ID":"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9","Type":"ContainerStarted","Data":"fb4099d238df84be70f940c947a06c2d0b1ba398009a19a464b3fd8eee4c00bd"}
Dec 02 10:14:13 crc kubenswrapper[4813]: I1202 10:14:13.278992 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxrcj" podStartSLOduration=1.809825322 podStartE2EDuration="4.278971891s" podCreationTimestamp="2025-12-02 10:14:09 +0000 UTC" firstStartedPulling="2025-12-02 10:14:10.200168947 +0000 UTC m=+374.395343249" lastFinishedPulling="2025-12-02 10:14:12.669315506 +0000 UTC m=+376.864489818" observedRunningTime="2025-12-02 10:14:13.278373463 +0000 UTC m=+377.473547775" watchObservedRunningTime="2025-12-02 10:14:13.278971891 +0000 UTC m=+377.474146193"
Dec 02 10:14:13 crc kubenswrapper[4813]: I1202 10:14:13.304940 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-67hrf" podStartSLOduration=2.793168671 podStartE2EDuration="4.304920889s" podCreationTimestamp="2025-12-02 10:14:09 +0000 UTC" firstStartedPulling="2025-12-02 10:14:11.222962961 +0000 UTC m=+375.418137273" lastFinishedPulling="2025-12-02 10:14:12.734715189 +0000 UTC m=+376.929889491" observedRunningTime="2025-12-02 10:14:13.300521867 +0000 UTC m=+377.495696169" watchObservedRunningTime="2025-12-02 10:14:13.304920889 +0000 UTC m=+377.500095201"
Dec 02 10:14:17 crc kubenswrapper[4813]: I1202 10:14:17.159696 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfp6c"
Dec 02 10:14:17 crc kubenswrapper[4813]: I1202 10:14:17.160337 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfp6c"
Dec 02 10:14:17 crc kubenswrapper[4813]: I1202 10:14:17.202208 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfp6c"
Dec 02 10:14:17 crc kubenswrapper[4813]: I1202 10:14:17.310376 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfp6c"
Dec 02 10:14:17 crc kubenswrapper[4813]: I1202 10:14:17.373981 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m6x2q"
Dec 02 10:14:17 crc kubenswrapper[4813]: I1202 10:14:17.374143 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m6x2q"
Dec 02 10:14:17 crc kubenswrapper[4813]: I1202 10:14:17.414328 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m6x2q"
Dec 02 10:14:18 crc kubenswrapper[4813]: I1202 10:14:18.312216 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m6x2q"
Dec 02 10:14:19 crc kubenswrapper[4813]: I1202 10:14:19.542402 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:19 crc kubenswrapper[4813]: I1202 10:14:19.542782 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:19 crc kubenswrapper[4813]: I1202 10:14:19.586693 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:19 crc kubenswrapper[4813]: I1202 10:14:19.746128 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:19 crc kubenswrapper[4813]: I1202 10:14:19.746524 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:19 crc kubenswrapper[4813]: I1202 10:14:19.786279 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:20 crc kubenswrapper[4813]: E1202 10:14:20.173548 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5961752a_37a1_4d64_95e6_9181e5960434.slice\": RecentStats: unable to find data in memory cache]"
Dec 02 10:14:20 crc kubenswrapper[4813]: I1202 10:14:20.326289 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-67hrf"
Dec 02 10:14:20 crc kubenswrapper[4813]: I1202 10:14:20.326367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxrcj"
Dec 02 10:14:25 crc kubenswrapper[4813]: I1202 10:14:25.718404 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" podUID="bd3bb4e8-6c34-42b4-b041-54de4c5d219b" containerName="registry" containerID="cri-o://d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817" gracePeriod=30
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.205498 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.299200 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-trusted-ca\") pod \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") "
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.299424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") "
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.299483 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-installation-pull-secrets\") pod \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") "
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.299512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-bound-sa-token\") pod \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") "
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.299557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-ca-trust-extracted\") pod \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") "
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.299598 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9dbt\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-kube-api-access-g9dbt\") pod \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") "
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.299633 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-tls\") pod \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") "
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.299667 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-certificates\") pod \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\" (UID: \"bd3bb4e8-6c34-42b4-b041-54de4c5d219b\") "
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.300780 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bd3bb4e8-6c34-42b4-b041-54de4c5d219b" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.300889 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bd3bb4e8-6c34-42b4-b041-54de4c5d219b" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.312348 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bd3bb4e8-6c34-42b4-b041-54de4c5d219b" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.312945 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bd3bb4e8-6c34-42b4-b041-54de4c5d219b" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.312956 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bd3bb4e8-6c34-42b4-b041-54de4c5d219b" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.313329 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bd3bb4e8-6c34-42b4-b041-54de4c5d219b" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.318404 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bd3bb4e8-6c34-42b4-b041-54de4c5d219b" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.318618 4813 generic.go:334] "Generic (PLEG): container finished" podID="bd3bb4e8-6c34-42b4-b041-54de4c5d219b" containerID="d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817" exitCode=0
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.318667 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" event={"ID":"bd3bb4e8-6c34-42b4-b041-54de4c5d219b","Type":"ContainerDied","Data":"d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817"}
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.318681 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp"
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.318706 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zkbcp" event={"ID":"bd3bb4e8-6c34-42b4-b041-54de4c5d219b","Type":"ContainerDied","Data":"2e3f961dcf44d90aa93bc034f34f04947234ef242b8a863853673a1d5fe7490f"}
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.318731 4813 scope.go:117] "RemoveContainer" containerID="d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817"
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.320179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-kube-api-access-g9dbt" (OuterVolumeSpecName: "kube-api-access-g9dbt") pod "bd3bb4e8-6c34-42b4-b041-54de4c5d219b" (UID: "bd3bb4e8-6c34-42b4-b041-54de4c5d219b"). InnerVolumeSpecName "kube-api-access-g9dbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.364996 4813 scope.go:117] "RemoveContainer" containerID="d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817"
Dec 02 10:14:26 crc kubenswrapper[4813]: E1202 10:14:26.365613 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817\": container with ID starting with d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817 not found: ID does not exist" containerID="d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817"
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.365704 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817"} err="failed to get container status \"d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817\": rpc error: code = NotFound desc = could not find container \"d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817\": container with ID starting with d855bd7e27c4ce6d65fe3e849bdd3c0675efa590a6a2967bcc64016805721817 not found: ID does not exist"
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.401747 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.401806 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.401821 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.401833 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9dbt\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-kube-api-access-g9dbt\") on node \"crc\" DevicePath \"\""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.401847 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.401861 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.401876 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd3bb4e8-6c34-42b4-b041-54de4c5d219b-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.664287 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zkbcp"]
Dec 02 10:14:26 crc kubenswrapper[4813]: I1202 10:14:26.670404 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zkbcp"]
Dec 02 10:14:28 crc kubenswrapper[4813]: I1202 10:14:28.074111 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3bb4e8-6c34-42b4-b041-54de4c5d219b" path="/var/lib/kubelet/pods/bd3bb4e8-6c34-42b4-b041-54de4c5d219b/volumes"
Dec 02 10:14:30 crc kubenswrapper[4813]: E1202 10:14:30.342544 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5961752a_37a1_4d64_95e6_9181e5960434.slice\": RecentStats: unable to find data in memory cache]"
Dec 02 10:14:34 crc kubenswrapper[4813]: I1202 10:14:34.273658 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:14:34 crc kubenswrapper[4813]: I1202 10:14:34.274126 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:14:40 crc kubenswrapper[4813]: E1202 10:14:40.476144 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5961752a_37a1_4d64_95e6_9181e5960434.slice\": RecentStats: unable to find data in memory cache]"
Dec 02 10:14:50 crc kubenswrapper[4813]: E1202 10:14:50.591746 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5961752a_37a1_4d64_95e6_9181e5960434.slice\": RecentStats: unable to find data in memory cache]"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.189421 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"]
Dec 02 10:15:00 crc kubenswrapper[4813]: E1202 10:15:00.191388 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3bb4e8-6c34-42b4-b041-54de4c5d219b" containerName="registry"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.191520 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3bb4e8-6c34-42b4-b041-54de4c5d219b" containerName="registry"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.191763 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3bb4e8-6c34-42b4-b041-54de4c5d219b" containerName="registry"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.192376 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.194359 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-secret-volume\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.194443 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjl4\" (UniqueName: \"kubernetes.io/projected/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-kube-api-access-svjl4\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.194483 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-config-volume\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.196245 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.200174 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.210758 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"]
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.295490 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-secret-volume\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.295840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svjl4\" (UniqueName: \"kubernetes.io/projected/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-kube-api-access-svjl4\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.295872 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-config-volume\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.296762 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-config-volume\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.303695 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-secret-volume\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.315326 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svjl4\" (UniqueName: \"kubernetes.io/projected/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-kube-api-access-svjl4\") pod \"collect-profiles-29411175-2xxld\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:00 crc kubenswrapper[4813]: I1202 10:15:00.528485 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:01 crc kubenswrapper[4813]: I1202 10:15:01.009862 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"]
Dec 02 10:15:01 crc kubenswrapper[4813]: I1202 10:15:01.521909 4813 generic.go:334] "Generic (PLEG): container finished" podID="6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b" containerID="ca92fbad47d844692898faa73b0825dfa65e01287964691b0cca51a5c224139c" exitCode=0
Dec 02 10:15:01 crc kubenswrapper[4813]: I1202 10:15:01.522033 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld" event={"ID":"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b","Type":"ContainerDied","Data":"ca92fbad47d844692898faa73b0825dfa65e01287964691b0cca51a5c224139c"}
Dec 02 10:15:01 crc kubenswrapper[4813]: I1202 10:15:01.522531 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld" event={"ID":"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b","Type":"ContainerStarted","Data":"89c875200121a765fbe70e3270846ce44469b0a8f59811a0262695cb194a09e2"}
Dec 02 10:15:02 crc kubenswrapper[4813]: I1202 10:15:02.788557 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:02 crc kubenswrapper[4813]: I1202 10:15:02.934779 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-config-volume\") pod \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") "
Dec 02 10:15:02 crc kubenswrapper[4813]: I1202 10:15:02.934908 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-secret-volume\") pod \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") "
Dec 02 10:15:02 crc kubenswrapper[4813]: I1202 10:15:02.934949 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svjl4\" (UniqueName: \"kubernetes.io/projected/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-kube-api-access-svjl4\") pod \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\" (UID: \"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b\") "
Dec 02 10:15:02 crc kubenswrapper[4813]: I1202 10:15:02.935780 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b" (UID: "6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:15:02 crc kubenswrapper[4813]: I1202 10:15:02.941379 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b" (UID: "6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:15:02 crc kubenswrapper[4813]: I1202 10:15:02.941419 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-kube-api-access-svjl4" (OuterVolumeSpecName: "kube-api-access-svjl4") pod "6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b" (UID: "6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b"). InnerVolumeSpecName "kube-api-access-svjl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:15:03 crc kubenswrapper[4813]: I1202 10:15:03.036832 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-config-volume\") on node \"crc\" DevicePath \"\""
Dec 02 10:15:03 crc kubenswrapper[4813]: I1202 10:15:03.036903 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 02 10:15:03 crc kubenswrapper[4813]: I1202 10:15:03.036918 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svjl4\" (UniqueName: \"kubernetes.io/projected/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b-kube-api-access-svjl4\") on node \"crc\" DevicePath \"\""
Dec 02 10:15:03 crc kubenswrapper[4813]: I1202 10:15:03.537664 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld" event={"ID":"6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b","Type":"ContainerDied","Data":"89c875200121a765fbe70e3270846ce44469b0a8f59811a0262695cb194a09e2"}
Dec 02 10:15:03 crc kubenswrapper[4813]: I1202 10:15:03.537726 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c875200121a765fbe70e3270846ce44469b0a8f59811a0262695cb194a09e2"
Dec 02 10:15:03 crc kubenswrapper[4813]: I1202 10:15:03.537786 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"
Dec 02 10:15:04 crc kubenswrapper[4813]: I1202 10:15:04.273959 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:15:04 crc kubenswrapper[4813]: I1202 10:15:04.274131 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:15:04 crc kubenswrapper[4813]: I1202 10:15:04.274198 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:15:04 crc kubenswrapper[4813]: I1202 10:15:04.274975 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a250494ac86f85dad09fbc8d2f9fa5868e3037f669dedba3f0be3dccef6a657e"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 10:15:04 crc kubenswrapper[4813]: I1202 10:15:04.275045 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://a250494ac86f85dad09fbc8d2f9fa5868e3037f669dedba3f0be3dccef6a657e" gracePeriod=600
Dec 02 10:15:04 crc kubenswrapper[4813]: I1202 10:15:04.558530 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="a250494ac86f85dad09fbc8d2f9fa5868e3037f669dedba3f0be3dccef6a657e" exitCode=0
Dec 02 10:15:04 crc kubenswrapper[4813]: I1202 10:15:04.559013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"a250494ac86f85dad09fbc8d2f9fa5868e3037f669dedba3f0be3dccef6a657e"}
Dec 02 10:15:04 crc kubenswrapper[4813]: I1202 10:15:04.559212 4813 scope.go:117] "RemoveContainer" containerID="c15dc34d0d676e15d3c040a8250bd3693acc1404d7d6bc53da232886edd9750a"
Dec 02 10:15:05 crc kubenswrapper[4813]: I1202 10:15:05.566761 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"f36770541fd368938450466b5c9fcefd3238ed79bdbe160003f69720d79c9545"}
Dec 02 10:17:04 crc kubenswrapper[4813]: I1202 10:17:04.273648 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:17:04 crc kubenswrapper[4813]: I1202 10:17:04.274233 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:17:34 crc kubenswrapper[4813]: I1202 10:17:34.274231 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:17:34 crc kubenswrapper[4813]: I1202 10:17:34.274695 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:18:04 crc kubenswrapper[4813]: I1202 10:18:04.274020 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:18:04 crc kubenswrapper[4813]: I1202 10:18:04.274634 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:18:04 crc kubenswrapper[4813]: I1202 10:18:04.274695 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:18:04 crc kubenswrapper[4813]: I1202 10:18:04.275601 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f36770541fd368938450466b5c9fcefd3238ed79bdbe160003f69720d79c9545"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 10:18:04 crc kubenswrapper[4813]: I1202 10:18:04.275682 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://f36770541fd368938450466b5c9fcefd3238ed79bdbe160003f69720d79c9545" gracePeriod=600
Dec 02 10:18:04 crc kubenswrapper[4813]: I1202 10:18:04.532289 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="f36770541fd368938450466b5c9fcefd3238ed79bdbe160003f69720d79c9545" exitCode=0
Dec 02 10:18:04 crc kubenswrapper[4813]: I1202 10:18:04.532357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"f36770541fd368938450466b5c9fcefd3238ed79bdbe160003f69720d79c9545"}
Dec 02 10:18:04 crc kubenswrapper[4813]: I1202 10:18:04.532930 4813 scope.go:117] "RemoveContainer" containerID="a250494ac86f85dad09fbc8d2f9fa5868e3037f669dedba3f0be3dccef6a657e"
Dec 02 10:18:05 crc kubenswrapper[4813]: I1202 10:18:05.541350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"a79340e298ba50bcecfdcb5460ce49802eb7f560cb68c7596603aaa065bf4488"}
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.744592 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ljd8m"]
Dec 02 10:19:47 crc kubenswrapper[4813]: E1202 10:19:47.745467 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b" containerName="collect-profiles"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.745487 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b" containerName="collect-profiles"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.745604 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b" containerName="collect-profiles"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.746986 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ljd8m"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.749513 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.749594 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-gg7h5"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.749600 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.763897 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fm8vw"]
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.764731 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-fm8vw"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.767199 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5x6zw"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.770382 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ljd8m"]
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.784217 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"]
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.785198 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.788335 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4j7ms"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.790708 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fm8vw"]
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.802740 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"]
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.868467 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9jh9\" (UniqueName: \"kubernetes.io/projected/05e6e8d5-45df-4a70-b05c-01e02b33f594-kube-api-access-s9jh9\") pod \"cert-manager-5b446d88c5-fm8vw\" (UID: \"05e6e8d5-45df-4a70-b05c-01e02b33f594\") " pod="cert-manager/cert-manager-5b446d88c5-fm8vw"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.868628 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd9st\" (UniqueName: \"kubernetes.io/projected/b28391b4-11dc-465d-a4e0-de2b65ea8ce6-kube-api-access-kd9st\") pod \"cert-manager-webhook-5655c58dd6-qmcmb\" (UID: \"b28391b4-11dc-465d-a4e0-de2b65ea8ce6\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.868716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2rd7\" (UniqueName: \"kubernetes.io/projected/27e3cc05-5212-4270-8ef1-d11c69db84aa-kube-api-access-n2rd7\") pod \"cert-manager-cainjector-7f985d654d-ljd8m\" (UID: \"27e3cc05-5212-4270-8ef1-d11c69db84aa\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ljd8m"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.970099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9jh9\" (UniqueName: \"kubernetes.io/projected/05e6e8d5-45df-4a70-b05c-01e02b33f594-kube-api-access-s9jh9\") pod \"cert-manager-5b446d88c5-fm8vw\" (UID: \"05e6e8d5-45df-4a70-b05c-01e02b33f594\") " pod="cert-manager/cert-manager-5b446d88c5-fm8vw"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.970446 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd9st\" (UniqueName: \"kubernetes.io/projected/b28391b4-11dc-465d-a4e0-de2b65ea8ce6-kube-api-access-kd9st\") pod \"cert-manager-webhook-5655c58dd6-qmcmb\" (UID: \"b28391b4-11dc-465d-a4e0-de2b65ea8ce6\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.970568 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2rd7\" (UniqueName: \"kubernetes.io/projected/27e3cc05-5212-4270-8ef1-d11c69db84aa-kube-api-access-n2rd7\") pod \"cert-manager-cainjector-7f985d654d-ljd8m\" (UID: \"27e3cc05-5212-4270-8ef1-d11c69db84aa\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ljd8m"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.991073 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd9st\" (UniqueName: \"kubernetes.io/projected/b28391b4-11dc-465d-a4e0-de2b65ea8ce6-kube-api-access-kd9st\") pod \"cert-manager-webhook-5655c58dd6-qmcmb\" (UID: \"b28391b4-11dc-465d-a4e0-de2b65ea8ce6\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.991275 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2rd7\" (UniqueName: \"kubernetes.io/projected/27e3cc05-5212-4270-8ef1-d11c69db84aa-kube-api-access-n2rd7\") pod \"cert-manager-cainjector-7f985d654d-ljd8m\" (UID: \"27e3cc05-5212-4270-8ef1-d11c69db84aa\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ljd8m"
Dec 02 10:19:47 crc kubenswrapper[4813]: I1202 10:19:47.992097 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9jh9\" (UniqueName: \"kubernetes.io/projected/05e6e8d5-45df-4a70-b05c-01e02b33f594-kube-api-access-s9jh9\") pod \"cert-manager-5b446d88c5-fm8vw\" (UID: \"05e6e8d5-45df-4a70-b05c-01e02b33f594\") " pod="cert-manager/cert-manager-5b446d88c5-fm8vw"
Dec 02 10:19:48 crc kubenswrapper[4813]: I1202 10:19:48.066146 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ljd8m"
Dec 02 10:19:48 crc kubenswrapper[4813]: I1202 10:19:48.083266 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-fm8vw"
Dec 02 10:19:48 crc kubenswrapper[4813]: I1202 10:19:48.108474 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"
Dec 02 10:19:48 crc kubenswrapper[4813]: I1202 10:19:48.307290 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ljd8m"]
Dec 02 10:19:48 crc kubenswrapper[4813]: I1202 10:19:48.322656 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 10:19:48 crc kubenswrapper[4813]: I1202 10:19:48.343969 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fm8vw"]
Dec 02 10:19:48 crc kubenswrapper[4813]: I1202 10:19:48.370766 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"]
Dec 02 10:19:48 crc kubenswrapper[4813]: W1202 10:19:48.373274 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb28391b4_11dc_465d_a4e0_de2b65ea8ce6.slice/crio-732799fe0c57f827706ae0a5f118c475a17557cc5f443fcee124dd68b8b50a74 WatchSource:0}: Error finding container 732799fe0c57f827706ae0a5f118c475a17557cc5f443fcee124dd68b8b50a74: Status 404 returned error can't find the container with id 732799fe0c57f827706ae0a5f118c475a17557cc5f443fcee124dd68b8b50a74
Dec 02 10:19:49 crc kubenswrapper[4813]: I1202 10:19:49.065161 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb" event={"ID":"b28391b4-11dc-465d-a4e0-de2b65ea8ce6","Type":"ContainerStarted","Data":"732799fe0c57f827706ae0a5f118c475a17557cc5f443fcee124dd68b8b50a74"}
Dec 02 10:19:49 crc kubenswrapper[4813]: I1202 10:19:49.066477 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-fm8vw" event={"ID":"05e6e8d5-45df-4a70-b05c-01e02b33f594","Type":"ContainerStarted","Data":"0e37bb1d8c59b82ba1cafb15fb3d396f64ce3212c4f6c8b67a61bb5a3f10fb02"}
Dec 02 10:19:49 crc kubenswrapper[4813]: I1202 10:19:49.067005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ljd8m" event={"ID":"27e3cc05-5212-4270-8ef1-d11c69db84aa","Type":"ContainerStarted","Data":"b741326cd450aad14f76d06a0fee2f2c6ad222729488480d9ba7c5bd772fa0b2"}
Dec 02 10:19:53 crc kubenswrapper[4813]: I1202 10:19:53.089361 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb" event={"ID":"b28391b4-11dc-465d-a4e0-de2b65ea8ce6","Type":"ContainerStarted","Data":"dc13a5b696c3b9a3631abcb0e627dc1042fe5bbf752977703337de72cb1cb350"}
Dec 02 10:19:53 crc kubenswrapper[4813]: I1202 10:19:53.090004 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"
Dec 02 10:19:53 crc kubenswrapper[4813]: I1202 10:19:53.091325 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-fm8vw" event={"ID":"05e6e8d5-45df-4a70-b05c-01e02b33f594","Type":"ContainerStarted","Data":"c19e4d42675d7a801339b4b45dd2ea8d69498df47f3d699e69f7b7fa44962a5d"}
Dec 02 10:19:53 crc kubenswrapper[4813]: I1202 10:19:53.092773 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ljd8m" event={"ID":"27e3cc05-5212-4270-8ef1-d11c69db84aa","Type":"ContainerStarted","Data":"ab1b4547ed3174806d993f819b285e0f720f96e6098e27d69afc882722434858"}
Dec 02 10:19:53 crc kubenswrapper[4813]: I1202 10:19:53.106022 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb" podStartSLOduration=2.433756533 podStartE2EDuration="6.106001672s" podCreationTimestamp="2025-12-02 10:19:47 +0000 UTC" firstStartedPulling="2025-12-02 10:19:48.375516118 +0000 UTC m=+712.570690420" lastFinishedPulling="2025-12-02 10:19:52.047761267 +0000 UTC m=+716.242935559" observedRunningTime="2025-12-02 10:19:53.10372956 +0000 UTC m=+717.298903862" watchObservedRunningTime="2025-12-02 10:19:53.106001672 +0000 UTC m=+717.301175984"
Dec 02 10:19:53 crc kubenswrapper[4813]: I1202 10:19:53.122687 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-ljd8m" podStartSLOduration=2.402821478 podStartE2EDuration="6.12267344s" podCreationTimestamp="2025-12-02 10:19:47 +0000 UTC" firstStartedPulling="2025-12-02 10:19:48.322428095 +0000 UTC m=+712.517602397" lastFinishedPulling="2025-12-02 10:19:52.042280057 +0000 UTC m=+716.237454359" observedRunningTime="2025-12-02 10:19:53.121818967 +0000 UTC m=+717.316993269" watchObservedRunningTime="2025-12-02 10:19:53.12267344 +0000 UTC m=+717.317847742"
Dec 02 10:19:53 crc kubenswrapper[4813]: I1202 10:19:53.136270 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-fm8vw" podStartSLOduration=2.373423837 podStartE2EDuration="6.136253394s" podCreationTimestamp="2025-12-02 10:19:47 +0000 UTC" firstStartedPulling="2025-12-02 10:19:48.345499165 +0000 UTC m=+712.540673467" lastFinishedPulling="2025-12-02 10:19:52.108328722 +0000 UTC m=+716.303503024" observedRunningTime="2025-12-02 10:19:53.133715774 +0000 UTC m=+717.328890076" watchObservedRunningTime="2025-12-02 10:19:53.136253394 +0000 UTC m=+717.331427696"
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.894929 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jj7j"]
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.895857 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovn-controller" containerID="cri-o://93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57" gracePeriod=30
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.895918 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="nbdb" containerID="cri-o://9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7" gracePeriod=30
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.895922 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95" gracePeriod=30
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.896013 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kube-rbac-proxy-node" containerID="cri-o://db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8" gracePeriod=30
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.896027 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovn-acl-logging" containerID="cri-o://6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1" gracePeriod=30
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.895952 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="northd" containerID="cri-o://7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb" gracePeriod=30
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.896151 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="sbdb" containerID="cri-o://3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66" gracePeriod=30
Dec 02 10:19:57 crc kubenswrapper[4813]: I1202 10:19:57.924678 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" containerID="cri-o://539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9" gracePeriod=30
Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.112093 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmcmb"
Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.833724 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/3.log"
Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.837378 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovn-acl-logging/0.log"
Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.837948 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovn-controller/0.log"
Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.838597 4813 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.886910 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m9p6d"] Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887132 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kube-rbac-proxy-node" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887145 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kube-rbac-proxy-node" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887153 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovn-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887161 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovn-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887167 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887175 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887184 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887193 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887204 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887211 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887221 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kubecfg-setup" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887228 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kubecfg-setup" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887240 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovn-acl-logging" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887247 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovn-acl-logging" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887257 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="sbdb" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887265 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="sbdb" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887276 4813 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887283 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887292 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887301 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887312 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="nbdb" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887317 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="nbdb" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887327 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="northd" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887333 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="northd" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887448 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="northd" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887456 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="nbdb" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887467 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887474 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887481 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="kube-rbac-proxy-node" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887489 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887496 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovn-acl-logging" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887504 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="sbdb" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887510 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovn-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887519 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887527 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887534 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: E1202 10:19:58.887623 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.887632 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerName="ovnkube-controller" Dec 02 10:19:58 crc kubenswrapper[4813]: I1202 10:19:58.889482 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002267 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-netd\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002326 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-log-socket\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002364 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-var-lib-openvswitch\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002383 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002398 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-ovn\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002548 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-etc-openvswitch\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002584 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-node-log\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002604 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovn-node-metrics-cert\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002470 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002479 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-log-socket" (OuterVolumeSpecName: "log-socket") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002483 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002667 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-node-log" (OuterVolumeSpecName: "node-log") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). 
InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002702 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-kubelet\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002762 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002778 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-systemd-units\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-bin\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-env-overrides\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003318 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002844 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.002875 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003266 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-slash\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003418 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003479 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-slash" (OuterVolumeSpecName: "host-slash") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-script-lib\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003535 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-systemd\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003578 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-config\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.003823 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004015 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-ovn-kubernetes\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-openvswitch\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004065 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-netns\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004139 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mllp7\" (UniqueName: \"kubernetes.io/projected/3551771a-22ef-4f85-ad6b-fa4033a3f90f-kube-api-access-mllp7\") pod \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\" (UID: \"3551771a-22ef-4f85-ad6b-fa4033a3f90f\") " Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004173 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004230 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004238 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004277 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004602 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-slash\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-node-log\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004685 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004765 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-env-overrides\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-run-ovn-kubernetes\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004824 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-ovn\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-log-socket\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvb5r\" (UniqueName: \"kubernetes.io/projected/d40fbeab-fea7-4b17-bf09-ca141f43822a-kube-api-access-pvb5r\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004944 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovnkube-config\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.004967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-var-lib-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005005 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-kubelet\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005026 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-systemd\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005047 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-cni-bin\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005100 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005121 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovn-node-metrics-cert\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005147 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-etc-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-systemd-units\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005212 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovnkube-script-lib\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005238 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-run-netns\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005267 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-cni-netd\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005312 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005325 4813 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005335 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005347 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005358 4813 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-log-socket\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005369 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005382 4813 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005393 4813 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005403 4813 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-node-log\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc 
kubenswrapper[4813]: I1202 10:19:59.005413 4813 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005424 4813 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005435 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005446 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005457 4813 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005468 4813 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-host-slash\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005480 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.005492 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.009620 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.009908 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3551771a-22ef-4f85-ad6b-fa4033a3f90f-kube-api-access-mllp7" (OuterVolumeSpecName: "kube-api-access-mllp7") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "kube-api-access-mllp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.015812 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3551771a-22ef-4f85-ad6b-fa4033a3f90f" (UID: "3551771a-22ef-4f85-ad6b-fa4033a3f90f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106499 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-log-socket\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106548 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvb5r\" (UniqueName: \"kubernetes.io/projected/d40fbeab-fea7-4b17-bf09-ca141f43822a-kube-api-access-pvb5r\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovnkube-config\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106587 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-var-lib-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106606 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-kubelet\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-systemd\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106641 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-log-socket\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106699 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-cni-bin\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106722 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-kubelet\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106656 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-cni-bin\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-systemd\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106779 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-var-lib-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106789 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovn-node-metrics-cert\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-etc-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-systemd-units\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovnkube-script-lib\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106902 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-run-netns\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106928 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-cni-netd\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106925 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-etc-openvswitch\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106953 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-run-netns\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-slash\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-systemd-units\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.106972 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-slash\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-cni-netd\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-node-log\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107101 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-node-log\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107137 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107190 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-env-overrides\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107221 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-run-ovn-kubernetes\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-ovn\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107291 4813 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3551771a-22ef-4f85-ad6b-fa4033a3f90f-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107294 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-host-run-ovn-kubernetes\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107326 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d40fbeab-fea7-4b17-bf09-ca141f43822a-run-ovn\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107305 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mllp7\" (UniqueName: \"kubernetes.io/projected/3551771a-22ef-4f85-ad6b-fa4033a3f90f-kube-api-access-mllp7\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107369 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/3551771a-22ef-4f85-ad6b-fa4033a3f90f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovnkube-config\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.107624 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-env-overrides\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.108019 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovnkube-script-lib\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.110388 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d40fbeab-fea7-4b17-bf09-ca141f43822a-ovn-node-metrics-cert\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.123093 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvb5r\" (UniqueName: \"kubernetes.io/projected/d40fbeab-fea7-4b17-bf09-ca141f43822a-kube-api-access-pvb5r\") pod \"ovnkube-node-m9p6d\" (UID: \"d40fbeab-fea7-4b17-bf09-ca141f43822a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.129734 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/2.log" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.130307 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/1.log" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.130367 4813 generic.go:334] "Generic (PLEG): container finished" podID="30b516bc-ab92-49fb-8f3b-431cf0ef3164" containerID="fa667f6b53370318e088cf15bfd020ef148487c511e3b82ae02d62cdb5a23253" exitCode=2 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.130466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x7cgx" event={"ID":"30b516bc-ab92-49fb-8f3b-431cf0ef3164","Type":"ContainerDied","Data":"fa667f6b53370318e088cf15bfd020ef148487c511e3b82ae02d62cdb5a23253"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.130550 4813 scope.go:117] "RemoveContainer" containerID="b62c975a01605ad5a9af8afe635fb13814bab6feac101078833ad30a84bfa33e" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.132209 4813 scope.go:117] "RemoveContainer" containerID="fa667f6b53370318e088cf15bfd020ef148487c511e3b82ae02d62cdb5a23253" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.132768 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-x7cgx_openshift-multus(30b516bc-ab92-49fb-8f3b-431cf0ef3164)\"" pod="openshift-multus/multus-x7cgx" podUID="30b516bc-ab92-49fb-8f3b-431cf0ef3164" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.133924 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovnkube-controller/3.log" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.136755 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovn-acl-logging/0.log" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.137269 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jj7j_3551771a-22ef-4f85-ad6b-fa4033a3f90f/ovn-controller/0.log" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.137729 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9" exitCode=0 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.137803 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66" exitCode=0 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.137866 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7" exitCode=0 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.137931 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb" exitCode=0 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.137997 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95" exitCode=0 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138055 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8" exitCode=0 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138155 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1" exitCode=143 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138215 4813 generic.go:334] "Generic (PLEG): container finished" podID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" containerID="93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57" exitCode=143 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.137763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" 
event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138001 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138576 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138689 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138744 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138793 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138842 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138887 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138934 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.138982 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139030 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139102 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139162 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139212 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139284 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139353 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139417 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139477 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139551 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139616 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139668 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139715 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139759 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139806 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.139941 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140020 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140096 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140165 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140237 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140306 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140380 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140451 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140521 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140580 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140639 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jj7j" event={"ID":"3551771a-22ef-4f85-ad6b-fa4033a3f90f","Type":"ContainerDied","Data":"cf723e6189d81899009071094f2bd195b6e42d389fdd5df1a6deb3e4dbc652b6"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140715 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140795 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 
10:19:59.140859 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140925 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.140977 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.141057 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.141146 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.141203 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.141269 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.141332 4813 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"} Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.181290 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jj7j"] Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.184187 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jj7j"] Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.205601 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:19:59 crc kubenswrapper[4813]: W1202 10:19:59.225312 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd40fbeab_fea7_4b17_bf09_ca141f43822a.slice/crio-d282c2cb0444ead9b53f435fcd7852167f20c25b71dec8ffc3efbb020fb68075 WatchSource:0}: Error finding container d282c2cb0444ead9b53f435fcd7852167f20c25b71dec8ffc3efbb020fb68075: Status 404 returned error can't find the container with id d282c2cb0444ead9b53f435fcd7852167f20c25b71dec8ffc3efbb020fb68075 Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.373893 4813 scope.go:117] "RemoveContainer" containerID="539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.405297 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.420094 4813 scope.go:117] "RemoveContainer" containerID="3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.432333 4813 scope.go:117] "RemoveContainer" containerID="9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.445560 4813 scope.go:117] "RemoveContainer" containerID="7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.459233 4813 scope.go:117] "RemoveContainer" containerID="1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.472635 4813 scope.go:117] "RemoveContainer" containerID="db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.483987 4813 scope.go:117] "RemoveContainer" containerID="6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.496899 4813 scope.go:117] "RemoveContainer" containerID="93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.511327 4813 scope.go:117] "RemoveContainer" containerID="6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.523930 4813 scope.go:117] "RemoveContainer" containerID="539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.524403 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": container with ID starting with 539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9 not found: ID does not exist" containerID="539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.524444 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} err="failed to get container status \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": rpc error: code = NotFound desc = could not find container \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": container with ID starting with 
539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.524474 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.524850 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": container with ID starting with 1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7 not found: ID does not exist" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.524886 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} err="failed to get container status \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": rpc error: code = NotFound desc = could not find container \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": container with ID starting with 1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.524906 4813 scope.go:117] "RemoveContainer" containerID="3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.525281 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": container with ID starting with 3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66 not found: ID does not exist" containerID="3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.525306 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} err="failed to get container status \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": rpc error: code = NotFound desc = could not find container \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": container with ID starting with 3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.525323 4813 scope.go:117] "RemoveContainer" containerID="9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.525721 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": container with ID starting with 9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7 not found: ID does not exist" containerID="9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.525752 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} err="failed to get container status \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": rpc 
error: code = NotFound desc = could not find container \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": container with ID starting with 9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.525771 4813 scope.go:117] "RemoveContainer" containerID="7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.526001 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": container with ID starting with 7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb not found: ID does not exist" containerID="7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.526028 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} err="failed to get container status \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": rpc error: code = NotFound desc = could not find container \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": container with ID starting with 7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.526045 4813 scope.go:117] "RemoveContainer" containerID="1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.526439 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": container with ID starting with 1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95 not found: ID does not exist" containerID="1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.526467 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} err="failed to get container status \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": rpc error: code = NotFound desc = could not find container \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": container with ID starting with 1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.526487 4813 scope.go:117] "RemoveContainer" containerID="db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.526714 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": container with ID starting with db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8 not found: ID does not exist" containerID="db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.526739 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} err="failed to get container status \"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": rpc error: code = NotFound desc = could not find container \"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": container with ID starting with db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.526757 4813 scope.go:117] "RemoveContainer" containerID="6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.527106 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": container with ID starting with 6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1 not found: ID does not exist" containerID="6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.527132 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} err="failed to get container status \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": rpc error: code = NotFound desc = could not find container \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": container with ID starting with 6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.527149 4813 scope.go:117] "RemoveContainer" containerID="93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.527370 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": container with ID starting with 93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57 not found: ID does not exist" containerID="93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.527401 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} err="failed to get container status \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": rpc error: code = NotFound desc = could not find container \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": container with ID starting with 93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.527419 4813 scope.go:117] "RemoveContainer" containerID="6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d" Dec 02 10:19:59 crc kubenswrapper[4813]: E1202 10:19:59.527668 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": container with ID starting with 6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d not found: ID does not exist" 
containerID="6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.527691 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"} err="failed to get container status \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": rpc error: code = NotFound desc = could not find container \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": container with ID starting with 6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.527707 4813 scope.go:117] "RemoveContainer" containerID="539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.527935 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} err="failed to get container status \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": rpc error: code = NotFound desc = could not find container \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": container with ID starting with 539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.527960 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.528191 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} err="failed to get container status \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": rpc error: code = NotFound desc = could not find container \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": container with ID starting with 1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.528213 4813 scope.go:117] "RemoveContainer" containerID="3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.528378 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} err="failed to get container status \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": rpc error: code = NotFound desc = could not find container \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": container with ID starting with 3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.528397 4813 scope.go:117] "RemoveContainer" containerID="9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.528558 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} err="failed to get container status \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": rpc error: code = NotFound desc = could not find 
container \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": container with ID starting with 9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.528578 4813 scope.go:117] "RemoveContainer" containerID="7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.528919 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} err="failed to get container status \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": rpc error: code = NotFound desc = could not find container \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": container with ID starting with 7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.528938 4813 scope.go:117] "RemoveContainer" containerID="1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.529147 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} err="failed to get container status \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": rpc error: code = NotFound desc = could not find container \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": container with ID starting with 1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.529166 4813 scope.go:117] "RemoveContainer" containerID="db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.529498 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} err="failed to get container status \"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": rpc error: code = NotFound desc = could not find container \"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": container with ID starting with db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.529522 4813 scope.go:117] "RemoveContainer" containerID="6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.529768 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} err="failed to get container status \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": rpc error: code = NotFound desc = could not find container \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": container with ID starting with 6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.529786 4813 scope.go:117] "RemoveContainer" containerID="93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.529975 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} err="failed to get container status \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": rpc error: code = NotFound desc = could not find container \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": container with ID starting with 93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.529991 4813 scope.go:117] "RemoveContainer" containerID="6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.530269 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"} err="failed to get container status \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": rpc error: code = NotFound desc = could not find container \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": container with ID starting with 6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.530288 4813 scope.go:117] "RemoveContainer" containerID="539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.530492 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} err="failed to get container status \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": rpc error: code = NotFound desc = could not find container \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": container with ID starting with 539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.530510 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.530719 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} err="failed to get container status \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": rpc error: code = NotFound desc = could not find container \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": container with ID starting with 1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.530736 4813 scope.go:117] "RemoveContainer" containerID="3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.530951 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} err="failed to get container status \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": rpc error: code = NotFound desc = could not find container \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": container with ID starting with 
3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.530967 4813 scope.go:117] "RemoveContainer" containerID="9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.531262 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} err="failed to get container status \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": rpc error: code = NotFound desc = could not find container \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": container with ID starting with 9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.531279 4813 scope.go:117] "RemoveContainer" containerID="7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.531524 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} err="failed to get container status \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": rpc error: code = NotFound desc = could not find container \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": container with ID starting with 7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.531541 4813 scope.go:117] "RemoveContainer" containerID="1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.531741 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} err="failed to get container status \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": rpc error: code = NotFound desc = could not find container \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": container with ID starting with 1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.531757 4813 scope.go:117] "RemoveContainer" containerID="db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.531978 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} err="failed to get container status \"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": rpc error: code = NotFound desc = could not find container \"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": container with ID starting with db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.531999 4813 scope.go:117] "RemoveContainer" containerID="6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.532264 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} err="failed to get container status \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": rpc error: code = NotFound desc = could not find container \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": container with ID starting with 6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.532282 4813 scope.go:117] "RemoveContainer" containerID="93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.532539 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} err="failed to get container status \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": rpc error: code = NotFound desc = could not find container \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": container with ID starting with 93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.532554 4813 scope.go:117] "RemoveContainer" containerID="6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.532767 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"} err="failed to get container status \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": rpc error: code = NotFound desc = could not find container \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": container with ID starting with 6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.532783 4813 scope.go:117] "RemoveContainer" containerID="539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.532976 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9"} err="failed to get container status \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": rpc error: code = NotFound desc = could not find container \"539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9\": container with ID starting with 539b64c1d7261367d5769d2e6437e15833c41d1ad7e65ccc28f2224ae62338f9 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.532991 4813 scope.go:117] "RemoveContainer" containerID="1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.533255 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7"} err="failed to get container status \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": rpc error: code = NotFound desc = could not find container \"1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7\": container with ID starting with 1c328bd0b2430800c2110fce9f6b55417163284cacc6d97e572bfbd51f90bae7 not found: ID does not exist" Dec 
02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.533274 4813 scope.go:117] "RemoveContainer" containerID="3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.533517 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66"} err="failed to get container status \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": rpc error: code = NotFound desc = could not find container \"3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66\": container with ID starting with 3a5e5c8e08d5f490f7d08a79e3b094c2ae39b252c0b801ebef2c43d748f53e66 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.533535 4813 scope.go:117] "RemoveContainer" containerID="9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.533733 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7"} err="failed to get container status \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": rpc error: code = NotFound desc = could not find container \"9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7\": container with ID starting with 9a5a5a2bab2240364bd698de329b7b66655198882055ae4548f34c85c92a38b7 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.533748 4813 scope.go:117] "RemoveContainer" containerID="7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.533936 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb"} err="failed to get container status \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": rpc error: code = NotFound desc = could not find container \"7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb\": container with ID starting with 7d5db6deacae8dbcf9dab6f7e6318fada44b30947491596f75ff6b27e9af11cb not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.533953 4813 scope.go:117] "RemoveContainer" containerID="1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.534196 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95"} err="failed to get container status \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": rpc error: code = NotFound desc = could not find container \"1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95\": container with ID starting with 1ae29dbb36d3ad37860f8f9ad5d1e28ab097105fd4d13216e43f42ffabdf4f95 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.534218 4813 scope.go:117] "RemoveContainer" containerID="db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.534424 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8"} err="failed to get container status 
\"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": rpc error: code = NotFound desc = could not find container \"db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8\": container with ID starting with db4dacd7629ba78256b40d7c0d12e4153b5aced41dae6f7f640752eaa2337ab8 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.534440 4813 scope.go:117] "RemoveContainer" containerID="6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.534652 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1"} err="failed to get container status \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": rpc error: code = NotFound desc = could not find container \"6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1\": container with ID starting with 6a75bfbd9309bb2c13b93632f19b0b96e03250f442ebdbc77afa807914e08ac1 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.534672 4813 scope.go:117] "RemoveContainer" containerID="93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.534862 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57"} err="failed to get container status \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": rpc error: code = NotFound desc = could not find container \"93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57\": container with ID starting with 93da109ae21f943b63206131ccf93469a09042c52935e210a51055351284da57 not found: ID does not exist" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.534879 4813 scope.go:117] "RemoveContainer" containerID="6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d" Dec 02 10:19:59 crc kubenswrapper[4813]: I1202 10:19:59.535161 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d"} err="failed to get container status \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": rpc error: code = NotFound desc = could not find container \"6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d\": container with ID starting with 6385731355f55d3f7419b4ceb234edc67988af7ca49da9af8f2c50aa47a7ae5d not found: ID does not exist" Dec 02 10:20:00 crc kubenswrapper[4813]: I1202 10:20:00.075601 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3551771a-22ef-4f85-ad6b-fa4033a3f90f" path="/var/lib/kubelet/pods/3551771a-22ef-4f85-ad6b-fa4033a3f90f/volumes" Dec 02 10:20:00 crc kubenswrapper[4813]: I1202 10:20:00.146633 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/2.log" Dec 02 10:20:00 crc kubenswrapper[4813]: I1202 10:20:00.147961 4813 generic.go:334] "Generic (PLEG): container finished" podID="d40fbeab-fea7-4b17-bf09-ca141f43822a" containerID="d254ce94457ded654c5c68f9c7061a7d407a3ed0e2caf966ef5cb9662b720fef" exitCode=0 Dec 02 10:20:00 crc kubenswrapper[4813]: I1202 10:20:00.147995 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" 
event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerDied","Data":"d254ce94457ded654c5c68f9c7061a7d407a3ed0e2caf966ef5cb9662b720fef"} Dec 02 10:20:00 crc kubenswrapper[4813]: I1202 10:20:00.148016 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"d282c2cb0444ead9b53f435fcd7852167f20c25b71dec8ffc3efbb020fb68075"} Dec 02 10:20:01 crc kubenswrapper[4813]: I1202 10:20:01.154981 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"9dfad77ae82930a8ba84e19826a61e7d403112df8e4b78f3aaaf6cbb624b02d5"} Dec 02 10:20:01 crc kubenswrapper[4813]: I1202 10:20:01.155515 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"a483958dd9795f0ce0ace7663d0a7527237eb9d23dbd99663fb7db9feaf5b18a"} Dec 02 10:20:01 crc kubenswrapper[4813]: I1202 10:20:01.155532 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"18d0a344ccad75b5e49ad73d6dd4e2f89b8de267306e4735e9369f98e7acc938"} Dec 02 10:20:01 crc kubenswrapper[4813]: I1202 10:20:01.155545 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"e55aae4c211a3c90d48bf837864ae082076f11cabdb66cf85dab9ae7373435f1"} Dec 02 10:20:01 crc kubenswrapper[4813]: I1202 10:20:01.155556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"6fbedab6932b77ee0509e1940a416a2a7854197572b28168d5891f8065966382"} Dec 02 10:20:01 crc kubenswrapper[4813]: I1202 10:20:01.155568 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"a6fb86853eebc69676388df814d1e4e575355ec45149255c41577451abf13b0a"} Dec 02 10:20:03 crc kubenswrapper[4813]: I1202 10:20:03.167927 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"08b87fd5aedaf30a1eaabb05ff720193de9cc1f29091da3f9a815fdb87930e46"} Dec 02 10:20:04 crc kubenswrapper[4813]: I1202 10:20:04.273443 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:20:04 crc kubenswrapper[4813]: I1202 10:20:04.273525 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:20:05 crc kubenswrapper[4813]: I1202 10:20:05.183340 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" event={"ID":"d40fbeab-fea7-4b17-bf09-ca141f43822a","Type":"ContainerStarted","Data":"92fad6c0b79d80611c308fcd07f584a91e98cd4970d66069e85ff7f4147a6028"} Dec 02 10:20:05 crc kubenswrapper[4813]: I1202 10:20:05.183712 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:20:05 crc kubenswrapper[4813]: I1202 10:20:05.215112 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" podStartSLOduration=7.215056048 podStartE2EDuration="7.215056048s" podCreationTimestamp="2025-12-02 10:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:20:05.214601596 +0000 UTC m=+729.409775918" watchObservedRunningTime="2025-12-02 10:20:05.215056048 +0000 UTC m=+729.410230350" Dec 02 10:20:05 crc kubenswrapper[4813]: I1202 10:20:05.219960 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:20:06 crc kubenswrapper[4813]: I1202 10:20:06.189136 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:20:06 crc kubenswrapper[4813]: I1202 10:20:06.189876 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:20:06 crc kubenswrapper[4813]: I1202 10:20:06.224114 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:20:11 crc kubenswrapper[4813]: I1202 10:20:11.067391 4813 scope.go:117] "RemoveContainer" containerID="fa667f6b53370318e088cf15bfd020ef148487c511e3b82ae02d62cdb5a23253" Dec 02 10:20:12 crc kubenswrapper[4813]: I1202 10:20:12.219721 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-x7cgx_30b516bc-ab92-49fb-8f3b-431cf0ef3164/kube-multus/2.log" Dec 02 10:20:12 crc kubenswrapper[4813]: I1202 10:20:12.220050 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-x7cgx" event={"ID":"30b516bc-ab92-49fb-8f3b-431cf0ef3164","Type":"ContainerStarted","Data":"c34c7325033dbeaba64e289c2002c0e8ab3bead8896dfab487ef82cb1b2c7901"} Dec 02 10:20:25 crc kubenswrapper[4813]: I1202 10:20:25.053290 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 10:20:29 crc kubenswrapper[4813]: I1202 10:20:29.229208 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m9p6d" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.128539 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxvjf"] Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.130382 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.141172 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxvjf"] Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.237805 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-catalog-content\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.237882 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8c9l\" (UniqueName: \"kubernetes.io/projected/07f8f7f7-c046-4df6-83eb-9cea742dae05-kube-api-access-w8c9l\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.238034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-utilities\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.339182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8c9l\" (UniqueName: \"kubernetes.io/projected/07f8f7f7-c046-4df6-83eb-9cea742dae05-kube-api-access-w8c9l\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.339285 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-utilities\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.339357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-catalog-content\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.339977 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-catalog-content\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.340173 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-utilities\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.361240 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w8c9l\" (UniqueName: \"kubernetes.io/projected/07f8f7f7-c046-4df6-83eb-9cea742dae05-kube-api-access-w8c9l\") pod \"community-operators-wxvjf\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.445957 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:33 crc kubenswrapper[4813]: I1202 10:20:33.692535 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxvjf"] Dec 02 10:20:34 crc kubenswrapper[4813]: I1202 10:20:34.273393 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:20:34 crc kubenswrapper[4813]: I1202 10:20:34.273472 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:20:34 crc kubenswrapper[4813]: I1202 10:20:34.335110 4813 generic.go:334] "Generic (PLEG): container finished" podID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerID="dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28" exitCode=0 Dec 02 10:20:34 crc kubenswrapper[4813]: I1202 10:20:34.335159 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxvjf" event={"ID":"07f8f7f7-c046-4df6-83eb-9cea742dae05","Type":"ContainerDied","Data":"dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28"} Dec 02 10:20:34 crc kubenswrapper[4813]: I1202 10:20:34.335185 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxvjf" event={"ID":"07f8f7f7-c046-4df6-83eb-9cea742dae05","Type":"ContainerStarted","Data":"76255ee1fe851a94be59881279fc42202e2c072a738059bd221564605727a6b8"} Dec 02 10:20:35 crc kubenswrapper[4813]: I1202 10:20:35.342379 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxvjf" event={"ID":"07f8f7f7-c046-4df6-83eb-9cea742dae05","Type":"ContainerStarted","Data":"06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32"} Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.228161 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-67hrf"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.228415 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-67hrf" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerName="registry-server" containerID="cri-o://fb4099d238df84be70f940c947a06c2d0b1ba398009a19a464b3fd8eee4c00bd" gracePeriod=30 Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.240856 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxrcj"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.241295 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lxrcj" 
podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerName="registry-server" containerID="cri-o://86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646" gracePeriod=30 Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.245631 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxvjf"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.252696 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7fr5"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.253057 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" podUID="34294f02-e9d2-4cfa-8d54-b87a4b743eb7" containerName="marketplace-operator" containerID="cri-o://35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb" gracePeriod=30 Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.257601 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfp6c"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.258760 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfp6c" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerName="registry-server" containerID="cri-o://ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95" gracePeriod=30 Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.281339 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m6x2q"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.289932 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m6x2q" podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerName="registry-server" containerID="cri-o://b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359" gracePeriod=30 Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.305917 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t8lnq"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.307982 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.316165 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t8lnq"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.351125 4813 generic.go:334] "Generic (PLEG): container finished" podID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerID="06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32" exitCode=0 Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.351352 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxvjf" event={"ID":"07f8f7f7-c046-4df6-83eb-9cea742dae05","Type":"ContainerDied","Data":"06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32"} Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.365334 4813 generic.go:334] "Generic (PLEG): container finished" podID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerID="fb4099d238df84be70f940c947a06c2d0b1ba398009a19a464b3fd8eee4c00bd" exitCode=0 Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.365370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67hrf" event={"ID":"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9","Type":"ContainerDied","Data":"fb4099d238df84be70f940c947a06c2d0b1ba398009a19a464b3fd8eee4c00bd"} Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.486981 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f56a82-0f75-40b8-8bf2-97c83422abbb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t8lnq\" (UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.487224 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c9f56a82-0f75-40b8-8bf2-97c83422abbb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t8lnq\" (UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.487277 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xz5k\" (UniqueName: \"kubernetes.io/projected/c9f56a82-0f75-40b8-8bf2-97c83422abbb-kube-api-access-6xz5k\") pod \"marketplace-operator-79b997595-t8lnq\" (UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.588422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f56a82-0f75-40b8-8bf2-97c83422abbb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t8lnq\" (UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.588873 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c9f56a82-0f75-40b8-8bf2-97c83422abbb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t8lnq\" 
(UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.588906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xz5k\" (UniqueName: \"kubernetes.io/projected/c9f56a82-0f75-40b8-8bf2-97c83422abbb-kube-api-access-6xz5k\") pod \"marketplace-operator-79b997595-t8lnq\" (UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.590669 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f56a82-0f75-40b8-8bf2-97c83422abbb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t8lnq\" (UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.597149 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c9f56a82-0f75-40b8-8bf2-97c83422abbb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t8lnq\" (UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.614568 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xz5k\" (UniqueName: \"kubernetes.io/projected/c9f56a82-0f75-40b8-8bf2-97c83422abbb-kube-api-access-6xz5k\") pod \"marketplace-operator-79b997595-t8lnq\" (UID: \"c9f56a82-0f75-40b8-8bf2-97c83422abbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.700511 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.707385 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-67hrf" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.717453 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.735240 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxrcj" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.748514 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.762395 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893606 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-catalog-content\") pod \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-operator-metrics\") pod \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893697 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmmfq\" (UniqueName: \"kubernetes.io/projected/d7558c9b-25d5-4d97-9b56-3955021119d7-kube-api-access-xmmfq\") pod \"d7558c9b-25d5-4d97-9b56-3955021119d7\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893758 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-catalog-content\") pod \"d7558c9b-25d5-4d97-9b56-3955021119d7\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893778 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbv2f\" (UniqueName: \"kubernetes.io/projected/9685da25-5941-4ea8-8b0d-efa884cdf2ea-kube-api-access-mbv2f\") pod \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-utilities\") pod \"12baf46c-5044-48de-ae74-b07fdb2241a1\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893835 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6mn8\" (UniqueName: \"kubernetes.io/projected/12baf46c-5044-48de-ae74-b07fdb2241a1-kube-api-access-b6mn8\") pod \"12baf46c-5044-48de-ae74-b07fdb2241a1\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893857 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-utilities\") pod \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893874 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kvlf\" (UniqueName: \"kubernetes.io/projected/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-kube-api-access-5kvlf\") pod \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-catalog-content\") pod \"12baf46c-5044-48de-ae74-b07fdb2241a1\" (UID: \"12baf46c-5044-48de-ae74-b07fdb2241a1\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893930 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lljx\" (UniqueName: \"kubernetes.io/projected/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-kube-api-access-2lljx\") pod \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893954 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-catalog-content\") pod \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\" (UID: \"9685da25-5941-4ea8-8b0d-efa884cdf2ea\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.893992 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-utilities\") pod \"d7558c9b-25d5-4d97-9b56-3955021119d7\" (UID: \"d7558c9b-25d5-4d97-9b56-3955021119d7\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.894037 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-trusted-ca\") pod \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\" (UID: \"34294f02-e9d2-4cfa-8d54-b87a4b743eb7\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.894090 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-utilities\") pod \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\" (UID: \"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9\") " Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.895032 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-utilities" (OuterVolumeSpecName: "utilities") pod "44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" (UID: "44ba0b3e-3d1f-4134-8e7e-580f9fc218a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.896901 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-utilities" (OuterVolumeSpecName: "utilities") pod "d7558c9b-25d5-4d97-9b56-3955021119d7" (UID: "d7558c9b-25d5-4d97-9b56-3955021119d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.898014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "34294f02-e9d2-4cfa-8d54-b87a4b743eb7" (UID: "34294f02-e9d2-4cfa-8d54-b87a4b743eb7"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.899011 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-utilities" (OuterVolumeSpecName: "utilities") pod "12baf46c-5044-48de-ae74-b07fdb2241a1" (UID: "12baf46c-5044-48de-ae74-b07fdb2241a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.902255 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9685da25-5941-4ea8-8b0d-efa884cdf2ea-kube-api-access-mbv2f" (OuterVolumeSpecName: "kube-api-access-mbv2f") pod "9685da25-5941-4ea8-8b0d-efa884cdf2ea" (UID: "9685da25-5941-4ea8-8b0d-efa884cdf2ea"). InnerVolumeSpecName "kube-api-access-mbv2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.902283 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-utilities" (OuterVolumeSpecName: "utilities") pod "9685da25-5941-4ea8-8b0d-efa884cdf2ea" (UID: "9685da25-5941-4ea8-8b0d-efa884cdf2ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.902779 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-kube-api-access-2lljx" (OuterVolumeSpecName: "kube-api-access-2lljx") pod "44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" (UID: "44ba0b3e-3d1f-4134-8e7e-580f9fc218a9"). InnerVolumeSpecName "kube-api-access-2lljx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.903036 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7558c9b-25d5-4d97-9b56-3955021119d7-kube-api-access-xmmfq" (OuterVolumeSpecName: "kube-api-access-xmmfq") pod "d7558c9b-25d5-4d97-9b56-3955021119d7" (UID: "d7558c9b-25d5-4d97-9b56-3955021119d7"). InnerVolumeSpecName "kube-api-access-xmmfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.903162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "34294f02-e9d2-4cfa-8d54-b87a4b743eb7" (UID: "34294f02-e9d2-4cfa-8d54-b87a4b743eb7"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.903454 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-kube-api-access-5kvlf" (OuterVolumeSpecName: "kube-api-access-5kvlf") pod "34294f02-e9d2-4cfa-8d54-b87a4b743eb7" (UID: "34294f02-e9d2-4cfa-8d54-b87a4b743eb7"). InnerVolumeSpecName "kube-api-access-5kvlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.904934 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12baf46c-5044-48de-ae74-b07fdb2241a1-kube-api-access-b6mn8" (OuterVolumeSpecName: "kube-api-access-b6mn8") pod "12baf46c-5044-48de-ae74-b07fdb2241a1" (UID: "12baf46c-5044-48de-ae74-b07fdb2241a1"). InnerVolumeSpecName "kube-api-access-b6mn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.931277 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7558c9b-25d5-4d97-9b56-3955021119d7" (UID: "d7558c9b-25d5-4d97-9b56-3955021119d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.931915 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t8lnq"] Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.967698 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" (UID: "44ba0b3e-3d1f-4134-8e7e-580f9fc218a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.968856 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9685da25-5941-4ea8-8b0d-efa884cdf2ea" (UID: "9685da25-5941-4ea8-8b0d-efa884cdf2ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996078 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996130 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6mn8\" (UniqueName: \"kubernetes.io/projected/12baf46c-5044-48de-ae74-b07fdb2241a1-kube-api-access-b6mn8\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996145 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996158 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kvlf\" (UniqueName: \"kubernetes.io/projected/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-kube-api-access-5kvlf\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996171 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lljx\" (UniqueName: \"kubernetes.io/projected/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-kube-api-access-2lljx\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996183 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9685da25-5941-4ea8-8b0d-efa884cdf2ea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996195 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996213 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996225 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996236 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996248 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34294f02-e9d2-4cfa-8d54-b87a4b743eb7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996260 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmmfq\" (UniqueName: \"kubernetes.io/projected/d7558c9b-25d5-4d97-9b56-3955021119d7-kube-api-access-xmmfq\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996271 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d7558c9b-25d5-4d97-9b56-3955021119d7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:36 crc kubenswrapper[4813]: I1202 10:20:36.996284 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbv2f\" (UniqueName: \"kubernetes.io/projected/9685da25-5941-4ea8-8b0d-efa884cdf2ea-kube-api-access-mbv2f\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.027673 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12baf46c-5044-48de-ae74-b07fdb2241a1" (UID: "12baf46c-5044-48de-ae74-b07fdb2241a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.097750 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12baf46c-5044-48de-ae74-b07fdb2241a1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.371565 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" event={"ID":"c9f56a82-0f75-40b8-8bf2-97c83422abbb","Type":"ContainerStarted","Data":"b5ff2b673bd6da9135491c91f87e293a58bebd6f5f75a11ec5059c724fd56696"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.372769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" event={"ID":"c9f56a82-0f75-40b8-8bf2-97c83422abbb","Type":"ContainerStarted","Data":"cf3e159e7dd63e22066b1f67eab1b9c3d545a82469cbee1f615bd22e74c13104"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.372845 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.373612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67hrf" event={"ID":"44ba0b3e-3d1f-4134-8e7e-580f9fc218a9","Type":"ContainerDied","Data":"af4915b5911d56298ab07de406415d50cbc88e1b86c04089c1727ce5c18de1ff"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.373672 4813 scope.go:117] "RemoveContainer" containerID="fb4099d238df84be70f940c947a06c2d0b1ba398009a19a464b3fd8eee4c00bd" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.373638 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-67hrf" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.376124 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6x2q" event={"ID":"12baf46c-5044-48de-ae74-b07fdb2241a1","Type":"ContainerDied","Data":"b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.376064 4813 generic.go:334] "Generic (PLEG): container finished" podID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerID="b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359" exitCode=0 Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.376208 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6x2q" event={"ID":"12baf46c-5044-48de-ae74-b07fdb2241a1","Type":"ContainerDied","Data":"818d061e2068a3dde2039f03ab82bb966aff0039d316f9b9ced790ac463680f4"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.376259 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m6x2q" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.380457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxvjf" event={"ID":"07f8f7f7-c046-4df6-83eb-9cea742dae05","Type":"ContainerStarted","Data":"c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.380544 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxvjf" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerName="registry-server" containerID="cri-o://c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558" gracePeriod=30 Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.383392 4813 generic.go:334] "Generic (PLEG): container finished" podID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerID="86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646" exitCode=0 Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.383493 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxrcj" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.383664 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxrcj" event={"ID":"9685da25-5941-4ea8-8b0d-efa884cdf2ea","Type":"ContainerDied","Data":"86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.386994 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxrcj" event={"ID":"9685da25-5941-4ea8-8b0d-efa884cdf2ea","Type":"ContainerDied","Data":"e81c3e110f88a6456c63a23f44223bba61d2675d1c3f9e048ecbeb3d8f7e93ed"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.387118 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.390770 4813 generic.go:334] "Generic (PLEG): container finished" podID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerID="ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95" exitCode=0 Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.390919 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfp6c" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.391059 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfp6c" event={"ID":"d7558c9b-25d5-4d97-9b56-3955021119d7","Type":"ContainerDied","Data":"ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.391160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfp6c" event={"ID":"d7558c9b-25d5-4d97-9b56-3955021119d7","Type":"ContainerDied","Data":"096dbbedf1d3d146c8c1eee476ffe194c6f5497239a2924ff65935548d526a2e"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.391516 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-t8lnq" podStartSLOduration=1.391502773 podStartE2EDuration="1.391502773s" podCreationTimestamp="2025-12-02 10:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:20:37.390792814 +0000 UTC m=+761.585967116" watchObservedRunningTime="2025-12-02 10:20:37.391502773 +0000 UTC m=+761.586677085" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.396389 4813 generic.go:334] "Generic (PLEG): container finished" podID="34294f02-e9d2-4cfa-8d54-b87a4b743eb7" containerID="35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb" exitCode=0 Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.396428 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.396444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" event={"ID":"34294f02-e9d2-4cfa-8d54-b87a4b743eb7","Type":"ContainerDied","Data":"35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.396911 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7fr5" event={"ID":"34294f02-e9d2-4cfa-8d54-b87a4b743eb7","Type":"ContainerDied","Data":"8ea8c33cf97a36d5b72aab9e28ce469e97e72810e6a8c94e24d6f4f0664d8f43"} Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.404960 4813 scope.go:117] "RemoveContainer" containerID="e886201c53c74fe2ff16bcfa65f6874a54a446dd1c9fac7fabf7f4253a6a62fe" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.428528 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxvjf" podStartSLOduration=1.597636705 podStartE2EDuration="4.428509s" podCreationTimestamp="2025-12-02 10:20:33 +0000 UTC" firstStartedPulling="2025-12-02 10:20:34.336760255 +0000 UTC m=+758.531934567" lastFinishedPulling="2025-12-02 10:20:37.16763256 +0000 UTC m=+761.362806862" observedRunningTime="2025-12-02 10:20:37.424547662 +0000 UTC m=+761.619721964" watchObservedRunningTime="2025-12-02 10:20:37.428509 +0000 UTC m=+761.623683312" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.440410 4813 scope.go:117] "RemoveContainer" containerID="278032a7f458ca86de73911a6b252127f33da20924d227b260c58e07cc2bd24d" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.468566 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-m6x2q"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.477752 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m6x2q"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.504573 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-67hrf"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.514338 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-67hrf"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.521501 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxrcj"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.528324 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxrcj"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.529675 4813 scope.go:117] "RemoveContainer" containerID="b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.534768 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfp6c"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.537925 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfp6c"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.547252 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7fr5"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.555728 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7fr5"] Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.557256 4813 scope.go:117] "RemoveContainer" containerID="c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.580045 4813 scope.go:117] "RemoveContainer" containerID="2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.602038 4813 scope.go:117] "RemoveContainer" containerID="b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.603744 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359\": container with ID starting with b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359 not found: ID does not exist" containerID="b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.603797 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359"} err="failed to get container status \"b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359\": rpc error: code = NotFound desc = could not find container \"b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359\": container with ID starting with b25350f39027ca4a90d9396da0a01dd589a46e8988b4d045f7d80184813d1359 not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.603832 4813 scope.go:117] "RemoveContainer" 
containerID="c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.604713 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b\": container with ID starting with c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b not found: ID does not exist" containerID="c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.604744 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b"} err="failed to get container status \"c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b\": rpc error: code = NotFound desc = could not find container \"c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b\": container with ID starting with c712270b450f5a751d761b96f5a431f8f95d356ef7b01473f8823e066cf1457b not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.604769 4813 scope.go:117] "RemoveContainer" containerID="2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.605501 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2\": container with ID starting with 2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2 not found: ID does not exist" containerID="2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.605525 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2"} err="failed to get container status \"2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2\": rpc error: code = NotFound desc = could not find container \"2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2\": container with ID starting with 2ef33f67146c58298682e6206370bfe6313d308530d2a962d04d84614263a8a2 not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.605545 4813 scope.go:117] "RemoveContainer" containerID="86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.623859 4813 scope.go:117] "RemoveContainer" containerID="b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.644257 4813 scope.go:117] "RemoveContainer" containerID="0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.662063 4813 scope.go:117] "RemoveContainer" containerID="86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.664118 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646\": container with ID starting with 86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646 not found: ID does not exist" containerID="86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646" 
Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.664152 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646"} err="failed to get container status \"86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646\": rpc error: code = NotFound desc = could not find container \"86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646\": container with ID starting with 86f72938c682cbfe8e32cb3ad97850c1ac8efb159b467eb16663cebfb8558646 not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.664175 4813 scope.go:117] "RemoveContainer" containerID="b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.664632 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071\": container with ID starting with b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071 not found: ID does not exist" containerID="b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.664656 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071"} err="failed to get container status \"b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071\": rpc error: code = NotFound desc = could not find container \"b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071\": container with ID starting with b3404bd84a765c8e218c582f232b771d4dabfd550daa5dc679a005879c5c9071 not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.664669 4813 scope.go:117] "RemoveContainer" containerID="0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.665085 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d\": container with ID starting with 0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d not found: ID does not exist" containerID="0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.665107 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d"} err="failed to get container status \"0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d\": rpc error: code = NotFound desc = could not find container \"0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d\": container with ID starting with 0b7124c51d25cf9f8b2810369fa5b0e27003adb15d6aa306d39083595196056d not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.665119 4813 scope.go:117] "RemoveContainer" containerID="ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.681416 4813 scope.go:117] "RemoveContainer" containerID="69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.697200 4813 scope.go:117] "RemoveContainer" 
containerID="3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.713511 4813 scope.go:117] "RemoveContainer" containerID="ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.714186 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95\": container with ID starting with ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95 not found: ID does not exist" containerID="ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.714237 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95"} err="failed to get container status \"ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95\": rpc error: code = NotFound desc = could not find container \"ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95\": container with ID starting with ae5306951eda626d66eca253be2671e7e0dbfb3b7936b28857d050685c472a95 not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.714276 4813 scope.go:117] "RemoveContainer" containerID="69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.714671 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000\": container with ID starting with 69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000 not found: ID does not exist" containerID="69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.714700 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000"} err="failed to get container status \"69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000\": rpc error: code = NotFound desc = could not find container \"69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000\": container with ID starting with 69ff43215bcbdf0c6c94769646bce23e26fa649b1246c3c924d87b08131a3000 not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.714722 4813 scope.go:117] "RemoveContainer" containerID="3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.715084 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012\": container with ID starting with 3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012 not found: ID does not exist" containerID="3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.715125 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012"} err="failed to get container status \"3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012\": rpc error: code = 
NotFound desc = could not find container \"3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012\": container with ID starting with 3d2492e6bae3e893e205f194a620b4ee6900ae6cfb7abaf3b52c72776e946012 not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.715145 4813 scope.go:117] "RemoveContainer" containerID="35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.732916 4813 scope.go:117] "RemoveContainer" containerID="35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb" Dec 02 10:20:37 crc kubenswrapper[4813]: E1202 10:20:37.733533 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb\": container with ID starting with 35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb not found: ID does not exist" containerID="35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.733599 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb"} err="failed to get container status \"35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb\": rpc error: code = NotFound desc = could not find container \"35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb\": container with ID starting with 35dc08eb83548e301748d9a4af9d6bcc7a55a33725382bc8fb06cb808f73b6bb not found: ID does not exist" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.746146 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxvjf_07f8f7f7-c046-4df6-83eb-9cea742dae05/registry-server/0.log" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.747319 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.907950 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8c9l\" (UniqueName: \"kubernetes.io/projected/07f8f7f7-c046-4df6-83eb-9cea742dae05-kube-api-access-w8c9l\") pod \"07f8f7f7-c046-4df6-83eb-9cea742dae05\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.908039 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-catalog-content\") pod \"07f8f7f7-c046-4df6-83eb-9cea742dae05\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.908129 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-utilities\") pod \"07f8f7f7-c046-4df6-83eb-9cea742dae05\" (UID: \"07f8f7f7-c046-4df6-83eb-9cea742dae05\") " Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.909302 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-utilities" (OuterVolumeSpecName: "utilities") pod "07f8f7f7-c046-4df6-83eb-9cea742dae05" (UID: "07f8f7f7-c046-4df6-83eb-9cea742dae05"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.913974 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f8f7f7-c046-4df6-83eb-9cea742dae05-kube-api-access-w8c9l" (OuterVolumeSpecName: "kube-api-access-w8c9l") pod "07f8f7f7-c046-4df6-83eb-9cea742dae05" (UID: "07f8f7f7-c046-4df6-83eb-9cea742dae05"). InnerVolumeSpecName "kube-api-access-w8c9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:37 crc kubenswrapper[4813]: I1202 10:20:37.963912 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07f8f7f7-c046-4df6-83eb-9cea742dae05" (UID: "07f8f7f7-c046-4df6-83eb-9cea742dae05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.009456 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.009528 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8c9l\" (UniqueName: \"kubernetes.io/projected/07f8f7f7-c046-4df6-83eb-9cea742dae05-kube-api-access-w8c9l\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.009543 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f8f7f7-c046-4df6-83eb-9cea742dae05-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.075938 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" path="/var/lib/kubelet/pods/12baf46c-5044-48de-ae74-b07fdb2241a1/volumes" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.076801 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34294f02-e9d2-4cfa-8d54-b87a4b743eb7" path="/var/lib/kubelet/pods/34294f02-e9d2-4cfa-8d54-b87a4b743eb7/volumes" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.077315 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" path="/var/lib/kubelet/pods/44ba0b3e-3d1f-4134-8e7e-580f9fc218a9/volumes" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.078321 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" path="/var/lib/kubelet/pods/9685da25-5941-4ea8-8b0d-efa884cdf2ea/volumes" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.078927 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" path="/var/lib/kubelet/pods/d7558c9b-25d5-4d97-9b56-3955021119d7/volumes" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315299 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzfpv"] Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315525 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315541 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315557 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315566 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315578 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315591 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315605 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315613 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315626 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315635 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315646 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315654 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315665 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315674 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315684 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315693 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315704 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315712 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315723 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315732 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315742 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315751 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315765 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315773 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerName="extract-content" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315783 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315792 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerName="extract-utilities" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315802 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34294f02-e9d2-4cfa-8d54-b87a4b743eb7" containerName="marketplace-operator" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315810 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="34294f02-e9d2-4cfa-8d54-b87a4b743eb7" containerName="marketplace-operator" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315825 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315835 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.315845 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315854 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315962 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7558c9b-25d5-4d97-9b56-3955021119d7" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315980 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="34294f02-e9d2-4cfa-8d54-b87a4b743eb7" containerName="marketplace-operator" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.315993 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="12baf46c-5044-48de-ae74-b07fdb2241a1" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.316002 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9685da25-5941-4ea8-8b0d-efa884cdf2ea" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.316011 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerName="registry-server" Dec 02 
10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.316023 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ba0b3e-3d1f-4134-8e7e-580f9fc218a9" containerName="registry-server" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.317151 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.322167 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.323572 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzfpv"] Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.406754 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wxvjf_07f8f7f7-c046-4df6-83eb-9cea742dae05/registry-server/0.log" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.407266 4813 generic.go:334] "Generic (PLEG): container finished" podID="07f8f7f7-c046-4df6-83eb-9cea742dae05" containerID="c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558" exitCode=1 Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.407318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxvjf" event={"ID":"07f8f7f7-c046-4df6-83eb-9cea742dae05","Type":"ContainerDied","Data":"c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558"} Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.407345 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxvjf" event={"ID":"07f8f7f7-c046-4df6-83eb-9cea742dae05","Type":"ContainerDied","Data":"76255ee1fe851a94be59881279fc42202e2c072a738059bd221564605727a6b8"} Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.407364 4813 scope.go:117] "RemoveContainer" containerID="c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.407471 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxvjf" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.414637 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-utilities\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.414730 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-catalog-content\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.414768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk77x\" (UniqueName: \"kubernetes.io/projected/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-kube-api-access-pk77x\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.425966 4813 scope.go:117] "RemoveContainer" containerID="06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.426708 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxvjf"] Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.430381 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wxvjf"] Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.441012 4813 scope.go:117] "RemoveContainer" containerID="dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.456563 4813 scope.go:117] "RemoveContainer" containerID="c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.457009 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558\": container with ID starting with c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558 not found: ID does not exist" containerID="c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.457053 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558"} err="failed to get container status \"c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558\": rpc error: code = NotFound desc = could not find container \"c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558\": container with ID starting with c2285a282fa74022af21fd30d6db342074777059aba13824d781af22c6f8a558 not found: ID does not exist" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.457106 4813 scope.go:117] "RemoveContainer" containerID="06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.457415 4813 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32\": container with ID starting with 06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32 not found: ID does not exist" containerID="06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.457465 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32"} err="failed to get container status \"06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32\": rpc error: code = NotFound desc = could not find container \"06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32\": container with ID starting with 06658414bcb92a172da856a25c15ba7e6d06da90dede3489b7aa4637dba0df32 not found: ID does not exist" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.457491 4813 scope.go:117] "RemoveContainer" containerID="dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28" Dec 02 10:20:38 crc kubenswrapper[4813]: E1202 10:20:38.457864 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28\": container with ID starting with dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28 not found: ID does not exist" containerID="dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.457921 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28"} err="failed to get container status \"dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28\": rpc error: code = NotFound desc = could not find container \"dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28\": container with ID starting with dec25a3bbfe3d1acc665216edf4e14878f8c1deb93b3968003b8f1e0b825ec28 not found: ID does not exist" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.516352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk77x\" (UniqueName: \"kubernetes.io/projected/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-kube-api-access-pk77x\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.516452 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-utilities\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.516775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-catalog-content\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.517242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-utilities\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.517305 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-catalog-content\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.532657 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk77x\" (UniqueName: \"kubernetes.io/projected/72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377-kube-api-access-pk77x\") pod \"certified-operators-mzfpv\" (UID: \"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377\") " pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.631831 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.717084 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqw25"] Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.721180 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.728011 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqw25"] Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.809197 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzfpv"] Dec 02 10:20:38 crc kubenswrapper[4813]: W1202 10:20:38.813398 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72ac4e4b_7f95_4caa_8a51_8ecc3eaa5377.slice/crio-06b0a160faa6a9561caed839843785ce5bae64c408f43cc3877b49562d0590da WatchSource:0}: Error finding container 06b0a160faa6a9561caed839843785ce5bae64c408f43cc3877b49562d0590da: Status 404 returned error can't find the container with id 06b0a160faa6a9561caed839843785ce5bae64c408f43cc3877b49562d0590da Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.819954 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-catalog-content\") pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.819995 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4cw5\" (UniqueName: \"kubernetes.io/projected/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-kube-api-access-l4cw5\") pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.820066 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-utilities\") 
pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.914132 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pl2xf"] Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.915051 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.917022 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.921540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-utilities\") pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.921590 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-catalog-content\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.921634 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-catalog-content\") pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.921653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4cw5\" (UniqueName: \"kubernetes.io/projected/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-kube-api-access-l4cw5\") pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.921680 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-utilities\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.921700 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnksz\" (UniqueName: \"kubernetes.io/projected/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-kube-api-access-rnksz\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.922544 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-utilities\") pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 
10:20:38.922830 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-catalog-content\") pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.924496 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pl2xf"] Dec 02 10:20:38 crc kubenswrapper[4813]: I1202 10:20:38.943321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4cw5\" (UniqueName: \"kubernetes.io/projected/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-kube-api-access-l4cw5\") pod \"certified-operators-jqw25\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.023102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-catalog-content\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.023718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-utilities\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.023753 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnksz\" (UniqueName: \"kubernetes.io/projected/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-kube-api-access-rnksz\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.024008 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-catalog-content\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.024121 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-utilities\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.039271 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.044066 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnksz\" (UniqueName: \"kubernetes.io/projected/28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45-kube-api-access-rnksz\") pod \"redhat-marketplace-pl2xf\" (UID: \"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45\") " pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.220001 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqw25"] Dec 02 10:20:39 crc kubenswrapper[4813]: W1202 10:20:39.222260 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcc96e8d_3067_40f7_b55a_f4d5c83526fc.slice/crio-22499c626611581ee549b7c2fa7a66baf7ecb413d76d00721b0f3d5f37dc2e8b WatchSource:0}: Error finding container 22499c626611581ee549b7c2fa7a66baf7ecb413d76d00721b0f3d5f37dc2e8b: Status 404 returned error can't find the container with id 22499c626611581ee549b7c2fa7a66baf7ecb413d76d00721b0f3d5f37dc2e8b Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.235253 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pl2xf" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.315487 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-68s6h"] Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.317803 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.326293 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-catalog-content\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.326351 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzbzg\" (UniqueName: \"kubernetes.io/projected/4b5e6293-087f-499a-9d48-2887110be409-kube-api-access-kzbzg\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.326396 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-utilities\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.330045 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-68s6h"] Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.420377 4813 generic.go:334] "Generic (PLEG): container finished" podID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerID="537115f2313295aa3a3adf9312a336c2a20c50c46bcf7a58d89ce056a1a642bc" exitCode=0 Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.420457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-jqw25" event={"ID":"bcc96e8d-3067-40f7-b55a-f4d5c83526fc","Type":"ContainerDied","Data":"537115f2313295aa3a3adf9312a336c2a20c50c46bcf7a58d89ce056a1a642bc"} Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.420487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqw25" event={"ID":"bcc96e8d-3067-40f7-b55a-f4d5c83526fc","Type":"ContainerStarted","Data":"22499c626611581ee549b7c2fa7a66baf7ecb413d76d00721b0f3d5f37dc2e8b"} Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.423342 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzfpv" event={"ID":"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377","Type":"ContainerDied","Data":"0881c71ce6745e8e7706aff8e7d603f604b7129611cb8eb1c9f6ba0a399140b7"} Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.423298 4813 generic.go:334] "Generic (PLEG): container finished" podID="72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377" containerID="0881c71ce6745e8e7706aff8e7d603f604b7129611cb8eb1c9f6ba0a399140b7" exitCode=0 Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.423897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzfpv" event={"ID":"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377","Type":"ContainerStarted","Data":"06b0a160faa6a9561caed839843785ce5bae64c408f43cc3877b49562d0590da"} Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.427057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzbzg\" (UniqueName: \"kubernetes.io/projected/4b5e6293-087f-499a-9d48-2887110be409-kube-api-access-kzbzg\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.427151 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-utilities\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.427198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-catalog-content\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.428001 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-catalog-content\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.428098 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-utilities\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.438916 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pl2xf"] Dec 02 
10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.451384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzbzg\" (UniqueName: \"kubernetes.io/projected/4b5e6293-087f-499a-9d48-2887110be409-kube-api-access-kzbzg\") pod \"redhat-marketplace-68s6h\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: W1202 10:20:39.454112 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28fcfbf0_52fc_4dd2_92ce_da5e6a3a5b45.slice/crio-ff67984c09b5ca6b9deaec1dc2f2d5cc0b84caa1a13fa06656c28c2cee82531f WatchSource:0}: Error finding container ff67984c09b5ca6b9deaec1dc2f2d5cc0b84caa1a13fa06656c28c2cee82531f: Status 404 returned error can't find the container with id ff67984c09b5ca6b9deaec1dc2f2d5cc0b84caa1a13fa06656c28c2cee82531f Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.670361 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:39 crc kubenswrapper[4813]: I1202 10:20:39.854026 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-68s6h"] Dec 02 10:20:39 crc kubenswrapper[4813]: W1202 10:20:39.861456 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b5e6293_087f_499a_9d48_2887110be409.slice/crio-e946da16cc794a413bce816d290cd22e10a4fe7db53f8e3aeb97af5ecda03680 WatchSource:0}: Error finding container e946da16cc794a413bce816d290cd22e10a4fe7db53f8e3aeb97af5ecda03680: Status 404 returned error can't find the container with id e946da16cc794a413bce816d290cd22e10a4fe7db53f8e3aeb97af5ecda03680 Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.075137 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f8f7f7-c046-4df6-83eb-9cea742dae05" path="/var/lib/kubelet/pods/07f8f7f7-c046-4df6-83eb-9cea742dae05/volumes" Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.433512 4813 generic.go:334] "Generic (PLEG): container finished" podID="28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45" containerID="fd844883e1419211851d2c4c47954c89f4d47ebb2a785daa99b35773e34931f6" exitCode=0 Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.434127 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl2xf" event={"ID":"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45","Type":"ContainerDied","Data":"fd844883e1419211851d2c4c47954c89f4d47ebb2a785daa99b35773e34931f6"} Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.434242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl2xf" event={"ID":"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45","Type":"ContainerStarted","Data":"ff67984c09b5ca6b9deaec1dc2f2d5cc0b84caa1a13fa06656c28c2cee82531f"} Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.437391 4813 generic.go:334] "Generic (PLEG): container finished" podID="4b5e6293-087f-499a-9d48-2887110be409" containerID="0d1321b041f7df52fcc2449ea872a132ce88e604859d7d0305d110ce38aee988" exitCode=0 Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.437448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68s6h" 
event={"ID":"4b5e6293-087f-499a-9d48-2887110be409","Type":"ContainerDied","Data":"0d1321b041f7df52fcc2449ea872a132ce88e604859d7d0305d110ce38aee988"} Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.437517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68s6h" event={"ID":"4b5e6293-087f-499a-9d48-2887110be409","Type":"ContainerStarted","Data":"e946da16cc794a413bce816d290cd22e10a4fe7db53f8e3aeb97af5ecda03680"} Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.441656 4813 generic.go:334] "Generic (PLEG): container finished" podID="72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377" containerID="86d9ab76591c24e18470edb9bedadd6d6a575f771e149900a94bc509a649a49b" exitCode=0 Dec 02 10:20:40 crc kubenswrapper[4813]: I1202 10:20:40.441707 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzfpv" event={"ID":"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377","Type":"ContainerDied","Data":"86d9ab76591c24e18470edb9bedadd6d6a575f771e149900a94bc509a649a49b"} Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.119275 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kjt9x"] Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.121131 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.124321 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.128630 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjt9x"] Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.248146 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015756fa-cf90-4fa0-a1a4-075fa8b36171-utilities\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.248603 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxq5b\" (UniqueName: \"kubernetes.io/projected/015756fa-cf90-4fa0-a1a4-075fa8b36171-kube-api-access-sxq5b\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.248749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015756fa-cf90-4fa0-a1a4-075fa8b36171-catalog-content\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.350026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxq5b\" (UniqueName: \"kubernetes.io/projected/015756fa-cf90-4fa0-a1a4-075fa8b36171-kube-api-access-sxq5b\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.350110 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015756fa-cf90-4fa0-a1a4-075fa8b36171-catalog-content\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.350236 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015756fa-cf90-4fa0-a1a4-075fa8b36171-utilities\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.350831 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015756fa-cf90-4fa0-a1a4-075fa8b36171-catalog-content\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.350967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015756fa-cf90-4fa0-a1a4-075fa8b36171-utilities\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.373048 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxq5b\" (UniqueName: \"kubernetes.io/projected/015756fa-cf90-4fa0-a1a4-075fa8b36171-kube-api-access-sxq5b\") pod \"redhat-operators-kjt9x\" (UID: \"015756fa-cf90-4fa0-a1a4-075fa8b36171\") " pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.451601 4813 generic.go:334] "Generic (PLEG): container finished" podID="4b5e6293-087f-499a-9d48-2887110be409" containerID="905508a80489a80bb8cb2fb5c84d838422ad673652f939d715fb01632da7258a" exitCode=0 Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.451687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68s6h" event={"ID":"4b5e6293-087f-499a-9d48-2887110be409","Type":"ContainerDied","Data":"905508a80489a80bb8cb2fb5c84d838422ad673652f939d715fb01632da7258a"} Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.455138 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzfpv" event={"ID":"72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377","Type":"ContainerStarted","Data":"23c6011793a8b707f8d9540aebb7e43aec74bc1328b62dae56563a797e0ba00a"} Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.458816 4813 generic.go:334] "Generic (PLEG): container finished" podID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerID="c50b1ea77feb0775d48290478ad176586c7f2efd7c96941af7b03fdf2b915412" exitCode=0 Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.458860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqw25" event={"ID":"bcc96e8d-3067-40f7-b55a-f4d5c83526fc","Type":"ContainerDied","Data":"c50b1ea77feb0775d48290478ad176586c7f2efd7c96941af7b03fdf2b915412"} Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.465106 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.519588 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2q2wq"] Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.520810 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.548952 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q2wq"] Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.551128 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzfpv" podStartSLOduration=1.9779842730000001 podStartE2EDuration="3.551097479s" podCreationTimestamp="2025-12-02 10:20:38 +0000 UTC" firstStartedPulling="2025-12-02 10:20:39.424895971 +0000 UTC m=+763.620070273" lastFinishedPulling="2025-12-02 10:20:40.998009177 +0000 UTC m=+765.193183479" observedRunningTime="2025-12-02 10:20:41.542936745 +0000 UTC m=+765.738111047" watchObservedRunningTime="2025-12-02 10:20:41.551097479 +0000 UTC m=+765.746271791" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.553743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w22n\" (UniqueName: \"kubernetes.io/projected/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-kube-api-access-7w22n\") pod \"redhat-operators-2q2wq\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.553811 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-utilities\") pod \"redhat-operators-2q2wq\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.553829 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-catalog-content\") pod \"redhat-operators-2q2wq\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.655820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w22n\" (UniqueName: \"kubernetes.io/projected/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-kube-api-access-7w22n\") pod \"redhat-operators-2q2wq\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.656356 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-utilities\") pod \"redhat-operators-2q2wq\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.656382 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-catalog-content\") pod \"redhat-operators-2q2wq\" (UID: 
\"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.657486 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-catalog-content\") pod \"redhat-operators-2q2wq\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.657746 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-utilities\") pod \"redhat-operators-2q2wq\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.682292 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w22n\" (UniqueName: \"kubernetes.io/projected/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-kube-api-access-7w22n\") pod \"redhat-operators-2q2wq\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.723273 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bjh7d"] Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.724633 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjh7d" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.729523 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.743457 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bjh7d"] Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.750778 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjt9x"] Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.757952 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-utilities\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.758064 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-catalog-content\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d" Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.758176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4drln\" (UniqueName: \"kubernetes.io/projected/b3628f43-b85b-4527-831c-fcb40e1755cd-kube-api-access-4drln\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d" Dec 02 10:20:41 crc kubenswrapper[4813]: W1202 10:20:41.765784 4813 manager.go:1169] Failed to process watch event {EventType:0 
Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.855429 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q2wq"
Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.859411 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-catalog-content\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.859496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4drln\" (UniqueName: \"kubernetes.io/projected/b3628f43-b85b-4527-831c-fcb40e1755cd-kube-api-access-4drln\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.859554 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-utilities\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.860554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-utilities\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.860686 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-catalog-content\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:41 crc kubenswrapper[4813]: I1202 10:20:41.890060 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4drln\" (UniqueName: \"kubernetes.io/projected/b3628f43-b85b-4527-831c-fcb40e1755cd-kube-api-access-4drln\") pod \"community-operators-bjh7d\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.096538 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.120218 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q2wq"]
Dec 02 10:20:42 crc kubenswrapper[4813]: W1202 10:20:42.132432 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaca3ab13_17f2_4ca5_bf1c_7d58c4824e3d.slice/crio-a2279dc329cb30bec481a80dcfd90c82edda59182f521e4a71d4fb6586eb6e7c WatchSource:0}: Error finding container a2279dc329cb30bec481a80dcfd90c82edda59182f521e4a71d4fb6586eb6e7c: Status 404 returned error can't find the container with id a2279dc329cb30bec481a80dcfd90c82edda59182f521e4a71d4fb6586eb6e7c
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.348785 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bjh7d"]
Dec 02 10:20:42 crc kubenswrapper[4813]: W1202 10:20:42.354735 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3628f43_b85b_4527_831c_fcb40e1755cd.slice/crio-3bab5811c8d9988a7d83a57db38a3dab98f8a9d7c9b1863ddeb20b3a7a5ef94e WatchSource:0}: Error finding container 3bab5811c8d9988a7d83a57db38a3dab98f8a9d7c9b1863ddeb20b3a7a5ef94e: Status 404 returned error can't find the container with id 3bab5811c8d9988a7d83a57db38a3dab98f8a9d7c9b1863ddeb20b3a7a5ef94e
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.466833 4813 generic.go:334] "Generic (PLEG): container finished" podID="015756fa-cf90-4fa0-a1a4-075fa8b36171" containerID="78df5f081e190f6df9d04266ef4fca8b248721afe2d4bbe787a2ed83bd87a34a" exitCode=0
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.466986 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt9x" event={"ID":"015756fa-cf90-4fa0-a1a4-075fa8b36171","Type":"ContainerDied","Data":"78df5f081e190f6df9d04266ef4fca8b248721afe2d4bbe787a2ed83bd87a34a"}
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.467505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt9x" event={"ID":"015756fa-cf90-4fa0-a1a4-075fa8b36171","Type":"ContainerStarted","Data":"e3bc5905bf108328122ebcfbef8a6115da1c1b86e5bb64d66441febb48d22ce5"}
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.471087 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqw25" event={"ID":"bcc96e8d-3067-40f7-b55a-f4d5c83526fc","Type":"ContainerStarted","Data":"22f4a0e2534c8b3b7512bec69b1f1c2422b13d618f8e2c3bc8f919a5e1dcd99d"}
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.474300 4813 generic.go:334] "Generic (PLEG): container finished" podID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerID="38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2" exitCode=0
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.474386 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q2wq" event={"ID":"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d","Type":"ContainerDied","Data":"38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2"}
Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.474446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q2wq" event={"ID":"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d","Type":"ContainerStarted","Data":"a2279dc329cb30bec481a80dcfd90c82edda59182f521e4a71d4fb6586eb6e7c"}
event={"ID":"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d","Type":"ContainerStarted","Data":"a2279dc329cb30bec481a80dcfd90c82edda59182f521e4a71d4fb6586eb6e7c"} Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.476810 4813 generic.go:334] "Generic (PLEG): container finished" podID="28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45" containerID="564759b9bfb0b3394077a46c5f7f3136f3e53ccfdfa8ff11351fafb7f7565a85" exitCode=0 Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.476990 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl2xf" event={"ID":"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45","Type":"ContainerDied","Data":"564759b9bfb0b3394077a46c5f7f3136f3e53ccfdfa8ff11351fafb7f7565a85"} Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.484805 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68s6h" event={"ID":"4b5e6293-087f-499a-9d48-2887110be409","Type":"ContainerStarted","Data":"4cc0d0c9b045e1c4a3146eec2980ada54d528d686d3257fc5b71e11cd9a725fb"} Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.486335 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjh7d" event={"ID":"b3628f43-b85b-4527-831c-fcb40e1755cd","Type":"ContainerStarted","Data":"3bab5811c8d9988a7d83a57db38a3dab98f8a9d7c9b1863ddeb20b3a7a5ef94e"} Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.536792 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jqw25" podStartSLOduration=1.9454343779999999 podStartE2EDuration="4.536775071s" podCreationTimestamp="2025-12-02 10:20:38 +0000 UTC" firstStartedPulling="2025-12-02 10:20:39.422034692 +0000 UTC m=+763.617208994" lastFinishedPulling="2025-12-02 10:20:42.013375385 +0000 UTC m=+766.208549687" observedRunningTime="2025-12-02 10:20:42.536468212 +0000 UTC m=+766.731642514" watchObservedRunningTime="2025-12-02 10:20:42.536775071 +0000 UTC m=+766.731949373" Dec 02 10:20:42 crc kubenswrapper[4813]: I1202 10:20:42.587865 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-68s6h" podStartSLOduration=2.093662926 podStartE2EDuration="3.587839004s" podCreationTimestamp="2025-12-02 10:20:39 +0000 UTC" firstStartedPulling="2025-12-02 10:20:40.444862454 +0000 UTC m=+764.640036756" lastFinishedPulling="2025-12-02 10:20:41.939038542 +0000 UTC m=+766.134212834" observedRunningTime="2025-12-02 10:20:42.581984573 +0000 UTC m=+766.777158885" watchObservedRunningTime="2025-12-02 10:20:42.587839004 +0000 UTC m=+766.783013306" Dec 02 10:20:43 crc kubenswrapper[4813]: I1202 10:20:43.497783 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerID="6f9734bc6e997a93205b4c087421e337b831cf5e9eee7e22c7c57e853e66a70c" exitCode=0 Dec 02 10:20:43 crc kubenswrapper[4813]: I1202 10:20:43.498270 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjh7d" event={"ID":"b3628f43-b85b-4527-831c-fcb40e1755cd","Type":"ContainerDied","Data":"6f9734bc6e997a93205b4c087421e337b831cf5e9eee7e22c7c57e853e66a70c"} Dec 02 10:20:44 crc kubenswrapper[4813]: I1202 10:20:44.522594 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt9x" event={"ID":"015756fa-cf90-4fa0-a1a4-075fa8b36171","Type":"ContainerStarted","Data":"0658be1a39f00a7a6aac209a05df5cc6b276e9a4933efe5227da61292be5dadd"} Dec 02 
10:20:44 crc kubenswrapper[4813]: I1202 10:20:44.527456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q2wq" event={"ID":"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d","Type":"ContainerStarted","Data":"cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1"} Dec 02 10:20:44 crc kubenswrapper[4813]: I1202 10:20:44.530477 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pl2xf" event={"ID":"28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45","Type":"ContainerStarted","Data":"dcf4da46c1add7adf61aac82e2109d659deaf5729efdb95ff6366168520d8064"} Dec 02 10:20:44 crc kubenswrapper[4813]: I1202 10:20:44.743781 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pl2xf" podStartSLOduration=3.751442365 podStartE2EDuration="6.743756049s" podCreationTimestamp="2025-12-02 10:20:38 +0000 UTC" firstStartedPulling="2025-12-02 10:20:40.435053354 +0000 UTC m=+764.630227656" lastFinishedPulling="2025-12-02 10:20:43.427367018 +0000 UTC m=+767.622541340" observedRunningTime="2025-12-02 10:20:44.742465874 +0000 UTC m=+768.937640176" watchObservedRunningTime="2025-12-02 10:20:44.743756049 +0000 UTC m=+768.938930351" Dec 02 10:20:45 crc kubenswrapper[4813]: I1202 10:20:45.757252 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjh7d" event={"ID":"b3628f43-b85b-4527-831c-fcb40e1755cd","Type":"ContainerStarted","Data":"095437fd8698efc75b2af74d8f89130e5c76a3a91ace88aa265511324e87dd64"} Dec 02 10:20:47 crc kubenswrapper[4813]: I1202 10:20:47.854984 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerID="095437fd8698efc75b2af74d8f89130e5c76a3a91ace88aa265511324e87dd64" exitCode=0 Dec 02 10:20:47 crc kubenswrapper[4813]: I1202 10:20:47.855107 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjh7d" event={"ID":"b3628f43-b85b-4527-831c-fcb40e1755cd","Type":"ContainerDied","Data":"095437fd8698efc75b2af74d8f89130e5c76a3a91ace88aa265511324e87dd64"} Dec 02 10:20:47 crc kubenswrapper[4813]: I1202 10:20:47.860020 4813 generic.go:334] "Generic (PLEG): container finished" podID="015756fa-cf90-4fa0-a1a4-075fa8b36171" containerID="0658be1a39f00a7a6aac209a05df5cc6b276e9a4933efe5227da61292be5dadd" exitCode=0 Dec 02 10:20:47 crc kubenswrapper[4813]: I1202 10:20:47.860112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt9x" event={"ID":"015756fa-cf90-4fa0-a1a4-075fa8b36171","Type":"ContainerDied","Data":"0658be1a39f00a7a6aac209a05df5cc6b276e9a4933efe5227da61292be5dadd"} Dec 02 10:20:47 crc kubenswrapper[4813]: I1202 10:20:47.863162 4813 generic.go:334] "Generic (PLEG): container finished" podID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerID="cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1" exitCode=0 Dec 02 10:20:47 crc kubenswrapper[4813]: I1202 10:20:47.863204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q2wq" event={"ID":"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d","Type":"ContainerDied","Data":"cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1"} Dec 02 10:20:48 crc kubenswrapper[4813]: I1202 10:20:48.632511 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzfpv" Dec 02 10:20:48 crc kubenswrapper[4813]: 
Dec 02 10:20:48 crc kubenswrapper[4813]: I1202 10:20:48.673616 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzfpv"
Dec 02 10:20:48 crc kubenswrapper[4813]: I1202 10:20:48.918931 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzfpv"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.039454 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqw25"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.039649 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jqw25"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.081248 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqw25"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.235595 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pl2xf"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.235636 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pl2xf"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.308450 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pl2xf"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.670726 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-68s6h"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.670773 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-68s6h"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.733508 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-68s6h"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.875417 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjh7d" event={"ID":"b3628f43-b85b-4527-831c-fcb40e1755cd","Type":"ContainerStarted","Data":"ca2e3d8bc7ad8e123b7784e80170dac44ff8850b0296fbdc254b0004c1ca6f4b"}
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.877520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjt9x" event={"ID":"015756fa-cf90-4fa0-a1a4-075fa8b36171","Type":"ContainerStarted","Data":"44c32329a4745cef0f00230f23ca56e378a5b6b1d11300b9599a40b036f3e8f9"}
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.881170 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q2wq" event={"ID":"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d","Type":"ContainerStarted","Data":"14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4"}
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.894543 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bjh7d" podStartSLOduration=3.650392853 podStartE2EDuration="8.894518678s" podCreationTimestamp="2025-12-02 10:20:41 +0000 UTC" firstStartedPulling="2025-12-02 10:20:43.505129955 +0000 UTC m=+767.700304257" lastFinishedPulling="2025-12-02 10:20:48.74925578 +0000 UTC m=+772.944430082" observedRunningTime="2025-12-02 10:20:49.892617465 +0000 UTC m=+774.087791767" watchObservedRunningTime="2025-12-02 10:20:49.894518678 +0000 UTC m=+774.089692980"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.913264 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kjt9x" podStartSLOduration=2.4811124749999998 podStartE2EDuration="8.913245312s" podCreationTimestamp="2025-12-02 10:20:41 +0000 UTC" firstStartedPulling="2025-12-02 10:20:42.469053689 +0000 UTC m=+766.664227991" lastFinishedPulling="2025-12-02 10:20:48.901186526 +0000 UTC m=+773.096360828" observedRunningTime="2025-12-02 10:20:49.912224004 +0000 UTC m=+774.107398306" watchObservedRunningTime="2025-12-02 10:20:49.913245312 +0000 UTC m=+774.108419614"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.934739 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2q2wq" podStartSLOduration=2.167052024 podStartE2EDuration="8.934711302s" podCreationTimestamp="2025-12-02 10:20:41 +0000 UTC" firstStartedPulling="2025-12-02 10:20:42.475722153 +0000 UTC m=+766.670896455" lastFinishedPulling="2025-12-02 10:20:49.243381431 +0000 UTC m=+773.438555733" observedRunningTime="2025-12-02 10:20:49.930716982 +0000 UTC m=+774.125891294" watchObservedRunningTime="2025-12-02 10:20:49.934711302 +0000 UTC m=+774.129885604"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.994107 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqw25"
Dec 02 10:20:49 crc kubenswrapper[4813]: I1202 10:20:49.994185 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-68s6h"
Dec 02 10:20:50 crc kubenswrapper[4813]: I1202 10:20:50.028087 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pl2xf"
Dec 02 10:20:51 crc kubenswrapper[4813]: I1202 10:20:51.467208 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kjt9x"
Dec 02 10:20:51 crc kubenswrapper[4813]: I1202 10:20:51.467554 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kjt9x"
Dec 02 10:20:51 crc kubenswrapper[4813]: I1202 10:20:51.856155 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2q2wq"
Dec 02 10:20:51 crc kubenswrapper[4813]: I1202 10:20:51.856474 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2q2wq"
Dec 02 10:20:51 crc kubenswrapper[4813]: I1202 10:20:51.907758 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqw25"]
Dec 02 10:20:52 crc kubenswrapper[4813]: I1202 10:20:52.097018 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:52 crc kubenswrapper[4813]: I1202 10:20:52.097126 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bjh7d"
Dec 02 10:20:52 crc kubenswrapper[4813]: I1202 10:20:52.106445 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-68s6h"]
pods=["openshift-marketplace/redhat-marketplace-68s6h"] Dec 02 10:20:52 crc kubenswrapper[4813]: I1202 10:20:52.106698 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-68s6h" podUID="4b5e6293-087f-499a-9d48-2887110be409" containerName="registry-server" containerID="cri-o://4cc0d0c9b045e1c4a3146eec2980ada54d528d686d3257fc5b71e11cd9a725fb" gracePeriod=2 Dec 02 10:20:52 crc kubenswrapper[4813]: I1202 10:20:52.138490 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bjh7d" Dec 02 10:20:52 crc kubenswrapper[4813]: I1202 10:20:52.702249 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kjt9x" podUID="015756fa-cf90-4fa0-a1a4-075fa8b36171" containerName="registry-server" probeResult="failure" output=< Dec 02 10:20:52 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 10:20:52 crc kubenswrapper[4813]: > Dec 02 10:20:52 crc kubenswrapper[4813]: I1202 10:20:52.898439 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jqw25" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerName="registry-server" containerID="cri-o://22f4a0e2534c8b3b7512bec69b1f1c2422b13d618f8e2c3bc8f919a5e1dcd99d" gracePeriod=2 Dec 02 10:20:52 crc kubenswrapper[4813]: I1202 10:20:52.906418 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2q2wq" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="registry-server" probeResult="failure" output=< Dec 02 10:20:52 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 10:20:52 crc kubenswrapper[4813]: > Dec 02 10:20:56 crc kubenswrapper[4813]: I1202 10:20:56.925092 4813 generic.go:334] "Generic (PLEG): container finished" podID="4b5e6293-087f-499a-9d48-2887110be409" containerID="4cc0d0c9b045e1c4a3146eec2980ada54d528d686d3257fc5b71e11cd9a725fb" exitCode=0 Dec 02 10:20:56 crc kubenswrapper[4813]: I1202 10:20:56.925122 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68s6h" event={"ID":"4b5e6293-087f-499a-9d48-2887110be409","Type":"ContainerDied","Data":"4cc0d0c9b045e1c4a3146eec2980ada54d528d686d3257fc5b71e11cd9a725fb"} Dec 02 10:20:56 crc kubenswrapper[4813]: I1202 10:20:56.928791 4813 generic.go:334] "Generic (PLEG): container finished" podID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerID="22f4a0e2534c8b3b7512bec69b1f1c2422b13d618f8e2c3bc8f919a5e1dcd99d" exitCode=0 Dec 02 10:20:56 crc kubenswrapper[4813]: I1202 10:20:56.928844 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqw25" event={"ID":"bcc96e8d-3067-40f7-b55a-f4d5c83526fc","Type":"ContainerDied","Data":"22f4a0e2534c8b3b7512bec69b1f1c2422b13d618f8e2c3bc8f919a5e1dcd99d"} Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.248241 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.253705 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.288325 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-utilities\") pod \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.288382 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4cw5\" (UniqueName: \"kubernetes.io/projected/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-kube-api-access-l4cw5\") pod \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.288432 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-catalog-content\") pod \"4b5e6293-087f-499a-9d48-2887110be409\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.288455 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-utilities\") pod \"4b5e6293-087f-499a-9d48-2887110be409\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.288479 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-catalog-content\") pod \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\" (UID: \"bcc96e8d-3067-40f7-b55a-f4d5c83526fc\") " Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.288516 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzbzg\" (UniqueName: \"kubernetes.io/projected/4b5e6293-087f-499a-9d48-2887110be409-kube-api-access-kzbzg\") pod \"4b5e6293-087f-499a-9d48-2887110be409\" (UID: \"4b5e6293-087f-499a-9d48-2887110be409\") " Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.289196 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-utilities" (OuterVolumeSpecName: "utilities") pod "bcc96e8d-3067-40f7-b55a-f4d5c83526fc" (UID: "bcc96e8d-3067-40f7-b55a-f4d5c83526fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.295988 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5e6293-087f-499a-9d48-2887110be409-kube-api-access-kzbzg" (OuterVolumeSpecName: "kube-api-access-kzbzg") pod "4b5e6293-087f-499a-9d48-2887110be409" (UID: "4b5e6293-087f-499a-9d48-2887110be409"). InnerVolumeSpecName "kube-api-access-kzbzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.296046 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-kube-api-access-l4cw5" (OuterVolumeSpecName: "kube-api-access-l4cw5") pod "bcc96e8d-3067-40f7-b55a-f4d5c83526fc" (UID: "bcc96e8d-3067-40f7-b55a-f4d5c83526fc"). InnerVolumeSpecName "kube-api-access-l4cw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.308823 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-utilities" (OuterVolumeSpecName: "utilities") pod "4b5e6293-087f-499a-9d48-2887110be409" (UID: "4b5e6293-087f-499a-9d48-2887110be409"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.346367 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcc96e8d-3067-40f7-b55a-f4d5c83526fc" (UID: "bcc96e8d-3067-40f7-b55a-f4d5c83526fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.389844 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.389896 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4cw5\" (UniqueName: \"kubernetes.io/projected/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-kube-api-access-l4cw5\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.389918 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.389937 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc96e8d-3067-40f7-b55a-f4d5c83526fc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.389957 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzbzg\" (UniqueName: \"kubernetes.io/projected/4b5e6293-087f-499a-9d48-2887110be409-kube-api-access-kzbzg\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.943750 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68s6h" event={"ID":"4b5e6293-087f-499a-9d48-2887110be409","Type":"ContainerDied","Data":"e946da16cc794a413bce816d290cd22e10a4fe7db53f8e3aeb97af5ecda03680"} Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.943771 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68s6h" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.943799 4813 scope.go:117] "RemoveContainer" containerID="4cc0d0c9b045e1c4a3146eec2980ada54d528d686d3257fc5b71e11cd9a725fb" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.948633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqw25" event={"ID":"bcc96e8d-3067-40f7-b55a-f4d5c83526fc","Type":"ContainerDied","Data":"22499c626611581ee549b7c2fa7a66baf7ecb413d76d00721b0f3d5f37dc2e8b"} Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.948757 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqw25" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.960794 4813 scope.go:117] "RemoveContainer" containerID="905508a80489a80bb8cb2fb5c84d838422ad673652f939d715fb01632da7258a" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.979725 4813 scope.go:117] "RemoveContainer" containerID="0d1321b041f7df52fcc2449ea872a132ce88e604859d7d0305d110ce38aee988" Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.989212 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqw25"] Dec 02 10:20:58 crc kubenswrapper[4813]: I1202 10:20:58.994775 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jqw25"] Dec 02 10:20:59 crc kubenswrapper[4813]: I1202 10:20:59.004529 4813 scope.go:117] "RemoveContainer" containerID="22f4a0e2534c8b3b7512bec69b1f1c2422b13d618f8e2c3bc8f919a5e1dcd99d" Dec 02 10:20:59 crc kubenswrapper[4813]: I1202 10:20:59.023881 4813 scope.go:117] "RemoveContainer" containerID="c50b1ea77feb0775d48290478ad176586c7f2efd7c96941af7b03fdf2b915412" Dec 02 10:20:59 crc kubenswrapper[4813]: I1202 10:20:59.041216 4813 scope.go:117] "RemoveContainer" containerID="537115f2313295aa3a3adf9312a336c2a20c50c46bcf7a58d89ce056a1a642bc" Dec 02 10:20:59 crc kubenswrapper[4813]: I1202 10:20:59.088472 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b5e6293-087f-499a-9d48-2887110be409" (UID: "4b5e6293-087f-499a-9d48-2887110be409"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:20:59 crc kubenswrapper[4813]: I1202 10:20:59.098426 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b5e6293-087f-499a-9d48-2887110be409-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:20:59 crc kubenswrapper[4813]: I1202 10:20:59.274704 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-68s6h"] Dec 02 10:20:59 crc kubenswrapper[4813]: I1202 10:20:59.279312 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-68s6h"] Dec 02 10:21:00 crc kubenswrapper[4813]: I1202 10:21:00.073473 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5e6293-087f-499a-9d48-2887110be409" path="/var/lib/kubelet/pods/4b5e6293-087f-499a-9d48-2887110be409/volumes" Dec 02 10:21:00 crc kubenswrapper[4813]: I1202 10:21:00.074635 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" path="/var/lib/kubelet/pods/bcc96e8d-3067-40f7-b55a-f4d5c83526fc/volumes" Dec 02 10:21:01 crc kubenswrapper[4813]: I1202 10:21:01.508185 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:21:01 crc kubenswrapper[4813]: I1202 10:21:01.544235 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kjt9x" Dec 02 10:21:01 crc kubenswrapper[4813]: I1202 10:21:01.904757 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:21:01 crc kubenswrapper[4813]: I1202 10:21:01.947207 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:21:02 crc kubenswrapper[4813]: I1202 10:21:02.134540 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bjh7d" Dec 02 10:21:02 crc kubenswrapper[4813]: I1202 10:21:02.709836 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2q2wq"] Dec 02 10:21:02 crc kubenswrapper[4813]: I1202 10:21:02.979639 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2q2wq" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="registry-server" containerID="cri-o://14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4" gracePeriod=2 Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.304273 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.449665 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w22n\" (UniqueName: \"kubernetes.io/projected/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-kube-api-access-7w22n\") pod \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.449771 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-catalog-content\") pod \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.449814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-utilities\") pod \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\" (UID: \"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d\") " Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.450719 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-utilities" (OuterVolumeSpecName: "utilities") pod "aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" (UID: "aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.454951 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-kube-api-access-7w22n" (OuterVolumeSpecName: "kube-api-access-7w22n") pod "aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" (UID: "aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d"). InnerVolumeSpecName "kube-api-access-7w22n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.547055 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" (UID: "aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.550915 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w22n\" (UniqueName: \"kubernetes.io/projected/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-kube-api-access-7w22n\") on node \"crc\" DevicePath \"\"" Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.550946 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.550957 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.989124 4813 generic.go:334] "Generic (PLEG): container finished" podID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerID="14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4" exitCode=0 Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.989193 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q2wq" Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.989214 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q2wq" event={"ID":"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d","Type":"ContainerDied","Data":"14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4"} Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.990349 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q2wq" event={"ID":"aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d","Type":"ContainerDied","Data":"a2279dc329cb30bec481a80dcfd90c82edda59182f521e4a71d4fb6586eb6e7c"} Dec 02 10:21:03 crc kubenswrapper[4813]: I1202 10:21:03.990375 4813 scope.go:117] "RemoveContainer" containerID="14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4" Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.014495 4813 scope.go:117] "RemoveContainer" containerID="cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1" Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.018440 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2q2wq"] Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.029894 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2q2wq"] Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.056441 4813 scope.go:117] "RemoveContainer" containerID="38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2" Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.070120 4813 scope.go:117] "RemoveContainer" containerID="14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4" Dec 02 10:21:04 crc kubenswrapper[4813]: E1202 10:21:04.070464 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4\": container with ID starting with 14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4 not found: ID does not exist" containerID="14b94bdfbf80c2eaebbc7aebc4b01dcc3a9f9ba5e3d902aebab7e5b257bb4dd4" Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.070496 4813 
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.070514 4813 scope.go:117] "RemoveContainer" containerID="cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1"
Dec 02 10:21:04 crc kubenswrapper[4813]: E1202 10:21:04.070728 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1\": container with ID starting with cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1 not found: ID does not exist" containerID="cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1"
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.070748 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1"} err="failed to get container status \"cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1\": rpc error: code = NotFound desc = could not find container \"cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1\": container with ID starting with cf78f41104bfbdc9671f5415d7ec323a0ea4b63a92de02d7698fc8508a1932f1 not found: ID does not exist"
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.070760 4813 scope.go:117] "RemoveContainer" containerID="38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2"
Dec 02 10:21:04 crc kubenswrapper[4813]: E1202 10:21:04.071143 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2\": container with ID starting with 38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2 not found: ID does not exist" containerID="38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2"
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.071163 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2"} err="failed to get container status \"38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2\": rpc error: code = NotFound desc = could not find container \"38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2\": container with ID starting with 38e9035d75cfe6f32aec3a8a8f5f8c138ae6303432de39fb98cd5c1ef3915fe2 not found: ID does not exist"
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.074355 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" path="/var/lib/kubelet/pods/aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d/volumes"
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.273736 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.273810 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.273851 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.274474 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a79340e298ba50bcecfdcb5460ce49802eb7f560cb68c7596603aaa065bf4488"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.274545 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://a79340e298ba50bcecfdcb5460ce49802eb7f560cb68c7596603aaa065bf4488" gracePeriod=600
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.998365 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="a79340e298ba50bcecfdcb5460ce49802eb7f560cb68c7596603aaa065bf4488" exitCode=0
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.998435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"a79340e298ba50bcecfdcb5460ce49802eb7f560cb68c7596603aaa065bf4488"}
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.999414 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"fc43cb3ffc9de648b66f84bc2df392aa68a916bc82756467d6f96c1d37ded85d"}
Dec 02 10:21:04 crc kubenswrapper[4813]: I1202 10:21:04.999436 4813 scope.go:117] "RemoveContainer" containerID="f36770541fd368938450466b5c9fcefd3238ed79bdbe160003f69720d79c9545"
Dec 02 10:23:04 crc kubenswrapper[4813]: I1202 10:23:04.274003 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:23:04 crc kubenswrapper[4813]: I1202 10:23:04.274936 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:23:34 crc kubenswrapper[4813]: I1202 10:23:34.274147 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:23:34 crc kubenswrapper[4813]: I1202 10:23:34.274734 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.274039 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.274826 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.274914 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.275979 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc43cb3ffc9de648b66f84bc2df392aa68a916bc82756467d6f96c1d37ded85d"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.276110 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://fc43cb3ffc9de648b66f84bc2df392aa68a916bc82756467d6f96c1d37ded85d" gracePeriod=600
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.929538 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="fc43cb3ffc9de648b66f84bc2df392aa68a916bc82756467d6f96c1d37ded85d" exitCode=0
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.929630 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"fc43cb3ffc9de648b66f84bc2df392aa68a916bc82756467d6f96c1d37ded85d"}
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.929847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"0c696c00353d51b2859ba7db6d368ec5e4be7615fb6e4b5668381ae90d4c6e32"}
Dec 02 10:24:04 crc kubenswrapper[4813]: I1202 10:24:04.929874 4813 scope.go:117] "RemoveContainer" containerID="a79340e298ba50bcecfdcb5460ce49802eb7f560cb68c7596603aaa065bf4488"
Dec 02 10:26:04 crc kubenswrapper[4813]: I1202 10:26:04.274332 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:26:04 crc kubenswrapper[4813]: I1202 10:26:04.274906 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:26:34 crc kubenswrapper[4813]: I1202 10:26:34.274250 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:26:34 crc kubenswrapper[4813]: I1202 10:26:34.274857 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:27:04 crc kubenswrapper[4813]: I1202 10:27:04.273761 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:27:04 crc kubenswrapper[4813]: I1202 10:27:04.274450 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:27:04 crc kubenswrapper[4813]: I1202 10:27:04.274502 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:27:04 crc kubenswrapper[4813]: I1202 10:27:04.275174 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c696c00353d51b2859ba7db6d368ec5e4be7615fb6e4b5668381ae90d4c6e32"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 10:27:04 crc kubenswrapper[4813]: I1202 10:27:04.275237 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://0c696c00353d51b2859ba7db6d368ec5e4be7615fb6e4b5668381ae90d4c6e32" gracePeriod=600
Dec 02 10:27:05 crc kubenswrapper[4813]: I1202 10:27:05.132100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"0c696c00353d51b2859ba7db6d368ec5e4be7615fb6e4b5668381ae90d4c6e32"}
Dec 02 10:27:05 crc kubenswrapper[4813]: I1202 10:27:05.132730 4813 scope.go:117] "RemoveContainer" containerID="fc43cb3ffc9de648b66f84bc2df392aa68a916bc82756467d6f96c1d37ded85d"
Dec 02 10:27:05 crc kubenswrapper[4813]: I1202 10:27:05.132184 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="0c696c00353d51b2859ba7db6d368ec5e4be7615fb6e4b5668381ae90d4c6e32" exitCode=0
Dec 02 10:27:05 crc kubenswrapper[4813]: I1202 10:27:05.133357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"6026076896f55bb919161f6d03c4a9615a39a32a45726f9be0f5d24c59e6a733"}
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.720846 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747"]
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721642 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721659 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721671 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5e6293-087f-499a-9d48-2887110be409" containerName="extract-utilities"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721681 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5e6293-087f-499a-9d48-2887110be409" containerName="extract-utilities"
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721689 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5e6293-087f-499a-9d48-2887110be409" containerName="extract-content"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721697 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5e6293-087f-499a-9d48-2887110be409" containerName="extract-content"
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721709 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerName="extract-utilities"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721716 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerName="extract-utilities"
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721729 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="extract-content"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721736 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="extract-content"
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721756 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="extract-utilities"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721764 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="extract-utilities"
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721774 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerName="extract-content"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721781 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerName="extract-content"
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721793 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721801 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: E1202 10:29:01.721809 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5e6293-087f-499a-9d48-2887110be409" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721816 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5e6293-087f-499a-9d48-2887110be409" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721928 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5e6293-087f-499a-9d48-2887110be409" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721941 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca3ab13-17f2-4ca5-bf1c-7d58c4824e3d" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.721953 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc96e8d-3067-40f7-b55a-f4d5c83526fc" containerName="registry-server"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.722689 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.725208 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.731106 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747"]
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.893563 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.893631 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59kn\" (UniqueName: \"kubernetes.io/projected/a761174c-6e39-48ec-9b14-1200fbc2daf3-kube-api-access-w59kn\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747"
Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.893676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\")
" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.994792 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.994889 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.994912 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w59kn\" (UniqueName: \"kubernetes.io/projected/a761174c-6e39-48ec-9b14-1200fbc2daf3-kube-api-access-w59kn\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.995410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:01 crc kubenswrapper[4813]: I1202 10:29:01.995492 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:02 crc kubenswrapper[4813]: I1202 10:29:02.014264 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w59kn\" (UniqueName: \"kubernetes.io/projected/a761174c-6e39-48ec-9b14-1200fbc2daf3-kube-api-access-w59kn\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:02 crc kubenswrapper[4813]: I1202 10:29:02.044576 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:02 crc kubenswrapper[4813]: I1202 10:29:02.236499 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747"] Dec 02 10:29:02 crc kubenswrapper[4813]: I1202 10:29:02.763347 4813 generic.go:334] "Generic (PLEG): container finished" podID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerID="8ed2ce22017234a630035452cfa017269308e15ccda673f4656d7d703c723c7f" exitCode=0 Dec 02 10:29:02 crc kubenswrapper[4813]: I1202 10:29:02.763432 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" event={"ID":"a761174c-6e39-48ec-9b14-1200fbc2daf3","Type":"ContainerDied","Data":"8ed2ce22017234a630035452cfa017269308e15ccda673f4656d7d703c723c7f"} Dec 02 10:29:02 crc kubenswrapper[4813]: I1202 10:29:02.763678 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" event={"ID":"a761174c-6e39-48ec-9b14-1200fbc2daf3","Type":"ContainerStarted","Data":"6ee9d287d818bf7abfac08da43058c9855a0c80f56d95e3a8bf0185a88067ffb"} Dec 02 10:29:02 crc kubenswrapper[4813]: I1202 10:29:02.765361 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:29:04 crc kubenswrapper[4813]: I1202 10:29:04.273565 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:29:04 crc kubenswrapper[4813]: I1202 10:29:04.273692 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:29:04 crc kubenswrapper[4813]: I1202 10:29:04.777191 4813 generic.go:334] "Generic (PLEG): container finished" podID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerID="4e3191173bdeea9f873e663a45842f1b3da07122906fb7fdf7152327f360c2c1" exitCode=0 Dec 02 10:29:04 crc kubenswrapper[4813]: I1202 10:29:04.777289 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" event={"ID":"a761174c-6e39-48ec-9b14-1200fbc2daf3","Type":"ContainerDied","Data":"4e3191173bdeea9f873e663a45842f1b3da07122906fb7fdf7152327f360c2c1"} Dec 02 10:29:05 crc kubenswrapper[4813]: I1202 10:29:05.784960 4813 generic.go:334] "Generic (PLEG): container finished" podID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerID="0b591bc2989909fe2e6147f2a3b6f8697b07b03305b225ff347c07dd753f5e8b" exitCode=0 Dec 02 10:29:05 crc kubenswrapper[4813]: I1202 10:29:05.785020 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" event={"ID":"a761174c-6e39-48ec-9b14-1200fbc2daf3","Type":"ContainerDied","Data":"0b591bc2989909fe2e6147f2a3b6f8697b07b03305b225ff347c07dd753f5e8b"} Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.019515 4813 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.156754 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-util\") pod \"a761174c-6e39-48ec-9b14-1200fbc2daf3\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.156883 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w59kn\" (UniqueName: \"kubernetes.io/projected/a761174c-6e39-48ec-9b14-1200fbc2daf3-kube-api-access-w59kn\") pod \"a761174c-6e39-48ec-9b14-1200fbc2daf3\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.156939 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-bundle\") pod \"a761174c-6e39-48ec-9b14-1200fbc2daf3\" (UID: \"a761174c-6e39-48ec-9b14-1200fbc2daf3\") " Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.157977 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-bundle" (OuterVolumeSpecName: "bundle") pod "a761174c-6e39-48ec-9b14-1200fbc2daf3" (UID: "a761174c-6e39-48ec-9b14-1200fbc2daf3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.161908 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a761174c-6e39-48ec-9b14-1200fbc2daf3-kube-api-access-w59kn" (OuterVolumeSpecName: "kube-api-access-w59kn") pod "a761174c-6e39-48ec-9b14-1200fbc2daf3" (UID: "a761174c-6e39-48ec-9b14-1200fbc2daf3"). InnerVolumeSpecName "kube-api-access-w59kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.177961 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-util" (OuterVolumeSpecName: "util") pod "a761174c-6e39-48ec-9b14-1200fbc2daf3" (UID: "a761174c-6e39-48ec-9b14-1200fbc2daf3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.258274 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.258312 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a761174c-6e39-48ec-9b14-1200fbc2daf3-util\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.258327 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w59kn\" (UniqueName: \"kubernetes.io/projected/a761174c-6e39-48ec-9b14-1200fbc2daf3-kube-api-access-w59kn\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.799501 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" event={"ID":"a761174c-6e39-48ec-9b14-1200fbc2daf3","Type":"ContainerDied","Data":"6ee9d287d818bf7abfac08da43058c9855a0c80f56d95e3a8bf0185a88067ffb"} Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.799806 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee9d287d818bf7abfac08da43058c9855a0c80f56d95e3a8bf0185a88067ffb" Dec 02 10:29:07 crc kubenswrapper[4813]: I1202 10:29:07.799589 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.276513 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw"] Dec 02 10:29:09 crc kubenswrapper[4813]: E1202 10:29:09.276747 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerName="pull" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.276761 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerName="pull" Dec 02 10:29:09 crc kubenswrapper[4813]: E1202 10:29:09.276780 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerName="util" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.276786 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerName="util" Dec 02 10:29:09 crc kubenswrapper[4813]: E1202 10:29:09.276795 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerName="extract" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.276801 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerName="extract" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.276885 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a761174c-6e39-48ec-9b14-1200fbc2daf3" containerName="extract" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.277274 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.279054 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wptfh" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.279231 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.279400 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.282564 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flvwr\" (UniqueName: \"kubernetes.io/projected/83de8a03-b86b-4d85-9c7e-92d7b56235c5-kube-api-access-flvwr\") pod \"nmstate-operator-5b5b58f5c8-h9kkw\" (UID: \"83de8a03-b86b-4d85-9c7e-92d7b56235c5\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.289335 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw"] Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.384148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flvwr\" (UniqueName: \"kubernetes.io/projected/83de8a03-b86b-4d85-9c7e-92d7b56235c5-kube-api-access-flvwr\") pod \"nmstate-operator-5b5b58f5c8-h9kkw\" (UID: \"83de8a03-b86b-4d85-9c7e-92d7b56235c5\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.401469 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flvwr\" (UniqueName: \"kubernetes.io/projected/83de8a03-b86b-4d85-9c7e-92d7b56235c5-kube-api-access-flvwr\") pod \"nmstate-operator-5b5b58f5c8-h9kkw\" (UID: \"83de8a03-b86b-4d85-9c7e-92d7b56235c5\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.591667 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw" Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.767382 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw"] Dec 02 10:29:09 crc kubenswrapper[4813]: I1202 10:29:09.810370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw" event={"ID":"83de8a03-b86b-4d85-9c7e-92d7b56235c5","Type":"ContainerStarted","Data":"e3b134c7ce99c91ddba7a5a9a8eb46b12d103fa36252ed4ad994fb762ddddd6c"} Dec 02 10:29:12 crc kubenswrapper[4813]: I1202 10:29:12.827442 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw" event={"ID":"83de8a03-b86b-4d85-9c7e-92d7b56235c5","Type":"ContainerStarted","Data":"12a0fe43a6c9c4d0808ee38a9f9b2dcf733d96b65b12f747b5dc3ebe9c1e23f8"} Dec 02 10:29:12 crc kubenswrapper[4813]: I1202 10:29:12.841399 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-h9kkw" podStartSLOduration=1.920723181 podStartE2EDuration="3.84138103s" podCreationTimestamp="2025-12-02 10:29:09 +0000 UTC" firstStartedPulling="2025-12-02 10:29:09.780316651 +0000 UTC m=+1273.975490953" lastFinishedPulling="2025-12-02 10:29:11.70097447 +0000 UTC m=+1275.896148802" observedRunningTime="2025-12-02 10:29:12.840455783 +0000 UTC m=+1277.035630085" watchObservedRunningTime="2025-12-02 10:29:12.84138103 +0000 UTC m=+1277.036555332" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.718424 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv"] Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.719821 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.721522 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-742bb" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.728907 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j"] Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.729775 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.732289 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv"] Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.733032 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.739796 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f26b552b-d766-4432-aba0-6460e8873c7f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vg66j\" (UID: \"f26b552b-d766-4432-aba0-6460e8873c7f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.739849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46dk\" (UniqueName: \"kubernetes.io/projected/f26b552b-d766-4432-aba0-6460e8873c7f-kube-api-access-h46dk\") pod \"nmstate-webhook-5f6d4c5ccb-vg66j\" (UID: \"f26b552b-d766-4432-aba0-6460e8873c7f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.739941 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tl47\" (UniqueName: \"kubernetes.io/projected/382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8-kube-api-access-4tl47\") pod \"nmstate-metrics-7f946cbc9-tdnqv\" (UID: \"382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.755417 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j"] Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.771003 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5zqxz"] Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.771859 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.841441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-ovs-socket\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.841493 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-dbus-socket\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.841571 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h46dk\" (UniqueName: \"kubernetes.io/projected/f26b552b-d766-4432-aba0-6460e8873c7f-kube-api-access-h46dk\") pod \"nmstate-webhook-5f6d4c5ccb-vg66j\" (UID: \"f26b552b-d766-4432-aba0-6460e8873c7f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.841632 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbxq\" (UniqueName: \"kubernetes.io/projected/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-kube-api-access-jxbxq\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.841665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-nmstate-lock\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.841702 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tl47\" (UniqueName: \"kubernetes.io/projected/382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8-kube-api-access-4tl47\") pod \"nmstate-metrics-7f946cbc9-tdnqv\" (UID: \"382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.841853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f26b552b-d766-4432-aba0-6460e8873c7f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vg66j\" (UID: \"f26b552b-d766-4432-aba0-6460e8873c7f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:13 crc kubenswrapper[4813]: E1202 10:29:13.841989 4813 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 02 10:29:13 crc kubenswrapper[4813]: E1202 10:29:13.842053 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f26b552b-d766-4432-aba0-6460e8873c7f-tls-key-pair podName:f26b552b-d766-4432-aba0-6460e8873c7f nodeName:}" failed. No retries permitted until 2025-12-02 10:29:14.342033383 +0000 UTC m=+1278.537207685 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/f26b552b-d766-4432-aba0-6460e8873c7f-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-vg66j" (UID: "f26b552b-d766-4432-aba0-6460e8873c7f") : secret "openshift-nmstate-webhook" not found Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.861106 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46dk\" (UniqueName: \"kubernetes.io/projected/f26b552b-d766-4432-aba0-6460e8873c7f-kube-api-access-h46dk\") pod \"nmstate-webhook-5f6d4c5ccb-vg66j\" (UID: \"f26b552b-d766-4432-aba0-6460e8873c7f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.865836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tl47\" (UniqueName: \"kubernetes.io/projected/382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8-kube-api-access-4tl47\") pod \"nmstate-metrics-7f946cbc9-tdnqv\" (UID: \"382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.868258 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq"] Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.868903 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.870971 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.871156 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-czlld" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.873515 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.879212 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq"] Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.942691 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c12883-0ec0-42c9-b7ea-581d4cece1f8-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.942746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00c12883-0ec0-42c9-b7ea-581d4cece1f8-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.942842 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-ovs-socket\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.942887 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-dbus-socket\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.942962 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt86q\" (UniqueName: \"kubernetes.io/projected/00c12883-0ec0-42c9-b7ea-581d4cece1f8-kube-api-access-tt86q\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.942980 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-ovs-socket\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.943134 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbxq\" (UniqueName: \"kubernetes.io/projected/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-kube-api-access-jxbxq\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.943188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-nmstate-lock\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.943275 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-nmstate-lock\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.943334 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-dbus-socket\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:13 crc kubenswrapper[4813]: I1202 10:29:13.961474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxbxq\" (UniqueName: \"kubernetes.io/projected/4b2cee1e-053a-4b13-ae80-cd3932e3cddb-kube-api-access-jxbxq\") pod \"nmstate-handler-5zqxz\" (UID: \"4b2cee1e-053a-4b13-ae80-cd3932e3cddb\") " pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.035145 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.044578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c12883-0ec0-42c9-b7ea-581d4cece1f8-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.044622 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00c12883-0ec0-42c9-b7ea-581d4cece1f8-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.044655 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt86q\" (UniqueName: \"kubernetes.io/projected/00c12883-0ec0-42c9-b7ea-581d4cece1f8-kube-api-access-tt86q\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.045852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00c12883-0ec0-42c9-b7ea-581d4cece1f8-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.052665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00c12883-0ec0-42c9-b7ea-581d4cece1f8-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.059511 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b88fbc58c-twsmz"] Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.060342 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.077779 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt86q\" (UniqueName: \"kubernetes.io/projected/00c12883-0ec0-42c9-b7ea-581d4cece1f8-kube-api-access-tt86q\") pod \"nmstate-console-plugin-7fbb5f6569-tqbbq\" (UID: \"00c12883-0ec0-42c9-b7ea-581d4cece1f8\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.087036 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.107542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b88fbc58c-twsmz"] Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.153402 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/464065b1-a21b-4aea-b6ad-25e13f3df486-console-oauth-config\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.153502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv76d\" (UniqueName: \"kubernetes.io/projected/464065b1-a21b-4aea-b6ad-25e13f3df486-kube-api-access-mv76d\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.153529 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-service-ca\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.153558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-trusted-ca-bundle\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.153810 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/464065b1-a21b-4aea-b6ad-25e13f3df486-console-serving-cert\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.153854 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-oauth-serving-cert\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.153880 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-console-config\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.215902 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.254624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/464065b1-a21b-4aea-b6ad-25e13f3df486-console-oauth-config\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.254681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv76d\" (UniqueName: \"kubernetes.io/projected/464065b1-a21b-4aea-b6ad-25e13f3df486-kube-api-access-mv76d\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.254699 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-service-ca\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.254717 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-trusted-ca-bundle\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.254795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/464065b1-a21b-4aea-b6ad-25e13f3df486-console-serving-cert\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.254810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-oauth-serving-cert\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.254825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-console-config\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.255633 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-console-config\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.258094 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-trusted-ca-bundle\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " 
pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.259154 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-service-ca\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.259656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/464065b1-a21b-4aea-b6ad-25e13f3df486-console-oauth-config\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.260172 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/464065b1-a21b-4aea-b6ad-25e13f3df486-oauth-serving-cert\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.261550 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/464065b1-a21b-4aea-b6ad-25e13f3df486-console-serving-cert\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.304022 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv76d\" (UniqueName: \"kubernetes.io/projected/464065b1-a21b-4aea-b6ad-25e13f3df486-kube-api-access-mv76d\") pod \"console-b88fbc58c-twsmz\" (UID: \"464065b1-a21b-4aea-b6ad-25e13f3df486\") " pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.358202 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f26b552b-d766-4432-aba0-6460e8873c7f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vg66j\" (UID: \"f26b552b-d766-4432-aba0-6460e8873c7f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.361374 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f26b552b-d766-4432-aba0-6460e8873c7f-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-vg66j\" (UID: \"f26b552b-d766-4432-aba0-6460e8873c7f\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.415180 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.454460 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq"] Dec 02 10:29:14 crc kubenswrapper[4813]: W1202 10:29:14.457247 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00c12883_0ec0_42c9_b7ea_581d4cece1f8.slice/crio-d2b10acfa837fbd31751362f3a050a0929fd914d23370f9a2ab6fdaa9706b1da WatchSource:0}: Error finding container d2b10acfa837fbd31751362f3a050a0929fd914d23370f9a2ab6fdaa9706b1da: Status 404 returned error can't find the container with id d2b10acfa837fbd31751362f3a050a0929fd914d23370f9a2ab6fdaa9706b1da Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.494284 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv"] Dec 02 10:29:14 crc kubenswrapper[4813]: W1202 10:29:14.501292 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod382b8d39_bbd0_4d7c_8bee_90f0cd26e0b8.slice/crio-4e86911b3fc5e8c6304fa5a2e7766c847d6a2bf8f3e336de2720178dc68b14ba WatchSource:0}: Error finding container 4e86911b3fc5e8c6304fa5a2e7766c847d6a2bf8f3e336de2720178dc68b14ba: Status 404 returned error can't find the container with id 4e86911b3fc5e8c6304fa5a2e7766c847d6a2bf8f3e336de2720178dc68b14ba Dec 02 10:29:14 crc kubenswrapper[4813]: W1202 10:29:14.582534 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod464065b1_a21b_4aea_b6ad_25e13f3df486.slice/crio-38f29d1f1f885531d7f81476187e50fbeaf20d6adfd7081fd434b57f3519542b WatchSource:0}: Error finding container 38f29d1f1f885531d7f81476187e50fbeaf20d6adfd7081fd434b57f3519542b: Status 404 returned error can't find the container with id 38f29d1f1f885531d7f81476187e50fbeaf20d6adfd7081fd434b57f3519542b Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.582868 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b88fbc58c-twsmz"] Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.646894 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.833926 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j"] Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.839361 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" event={"ID":"382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8","Type":"ContainerStarted","Data":"4e86911b3fc5e8c6304fa5a2e7766c847d6a2bf8f3e336de2720178dc68b14ba"} Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.840783 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b88fbc58c-twsmz" event={"ID":"464065b1-a21b-4aea-b6ad-25e13f3df486","Type":"ContainerStarted","Data":"df92dc96efe05fdebc37d59e6a4f57f16f539ad1025dcff66e2115dbbc32f5d0"} Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.840835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b88fbc58c-twsmz" event={"ID":"464065b1-a21b-4aea-b6ad-25e13f3df486","Type":"ContainerStarted","Data":"38f29d1f1f885531d7f81476187e50fbeaf20d6adfd7081fd434b57f3519542b"} Dec 02 10:29:14 crc kubenswrapper[4813]: W1202 10:29:14.843049 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf26b552b_d766_4432_aba0_6460e8873c7f.slice/crio-3b98aa6c5237d0f4a024e70803228b701081349449d445c1a8e03b0789238941 WatchSource:0}: Error finding container 3b98aa6c5237d0f4a024e70803228b701081349449d445c1a8e03b0789238941: Status 404 returned error can't find the container with id 3b98aa6c5237d0f4a024e70803228b701081349449d445c1a8e03b0789238941 Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.845174 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5zqxz" event={"ID":"4b2cee1e-053a-4b13-ae80-cd3932e3cddb","Type":"ContainerStarted","Data":"70564b1cc44de7439a00ad8b1df41a804fa604babf1a65196b52c11cf8c7e053"} Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.846610 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" event={"ID":"00c12883-0ec0-42c9-b7ea-581d4cece1f8","Type":"ContainerStarted","Data":"d2b10acfa837fbd31751362f3a050a0929fd914d23370f9a2ab6fdaa9706b1da"} Dec 02 10:29:14 crc kubenswrapper[4813]: I1202 10:29:14.859898 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b88fbc58c-twsmz" podStartSLOduration=0.859879946 podStartE2EDuration="859.879946ms" podCreationTimestamp="2025-12-02 10:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:29:14.856015366 +0000 UTC m=+1279.051189668" watchObservedRunningTime="2025-12-02 10:29:14.859879946 +0000 UTC m=+1279.055054248" Dec 02 10:29:15 crc kubenswrapper[4813]: I1202 10:29:15.853435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" event={"ID":"f26b552b-d766-4432-aba0-6460e8873c7f","Type":"ContainerStarted","Data":"3b98aa6c5237d0f4a024e70803228b701081349449d445c1a8e03b0789238941"} Dec 02 10:29:16 crc kubenswrapper[4813]: I1202 10:29:16.861946 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" 
event={"ID":"f26b552b-d766-4432-aba0-6460e8873c7f","Type":"ContainerStarted","Data":"a697b0ed9f926d191700bc4d11e1effa4e681e5ed87be1f6ea1096d3e1e9b3bd"} Dec 02 10:29:16 crc kubenswrapper[4813]: I1202 10:29:16.862323 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:16 crc kubenswrapper[4813]: I1202 10:29:16.865218 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5zqxz" event={"ID":"4b2cee1e-053a-4b13-ae80-cd3932e3cddb","Type":"ContainerStarted","Data":"17c50f7e93778c45e97e881d9ad61a2cff24d03912a636fcc1b07f626fa46551"} Dec 02 10:29:16 crc kubenswrapper[4813]: I1202 10:29:16.865561 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:16 crc kubenswrapper[4813]: I1202 10:29:16.866741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" event={"ID":"382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8","Type":"ContainerStarted","Data":"73a90c9e2a60c1d0034dba6ed90424bf1c17ca8e5323afdaa6a37eb8a4928912"} Dec 02 10:29:16 crc kubenswrapper[4813]: I1202 10:29:16.903960 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5zqxz" podStartSLOduration=1.7570155459999999 podStartE2EDuration="3.903935385s" podCreationTimestamp="2025-12-02 10:29:13 +0000 UTC" firstStartedPulling="2025-12-02 10:29:14.114258007 +0000 UTC m=+1278.309432299" lastFinishedPulling="2025-12-02 10:29:16.261177836 +0000 UTC m=+1280.456352138" observedRunningTime="2025-12-02 10:29:16.90201927 +0000 UTC m=+1281.097193582" watchObservedRunningTime="2025-12-02 10:29:16.903935385 +0000 UTC m=+1281.099109687" Dec 02 10:29:16 crc kubenswrapper[4813]: I1202 10:29:16.906717 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" podStartSLOduration=2.488197652 podStartE2EDuration="3.906703624s" podCreationTimestamp="2025-12-02 10:29:13 +0000 UTC" firstStartedPulling="2025-12-02 10:29:14.845250068 +0000 UTC m=+1279.040424370" lastFinishedPulling="2025-12-02 10:29:16.26375604 +0000 UTC m=+1280.458930342" observedRunningTime="2025-12-02 10:29:16.887479465 +0000 UTC m=+1281.082653797" watchObservedRunningTime="2025-12-02 10:29:16.906703624 +0000 UTC m=+1281.101877926" Dec 02 10:29:17 crc kubenswrapper[4813]: I1202 10:29:17.874236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" event={"ID":"00c12883-0ec0-42c9-b7ea-581d4cece1f8","Type":"ContainerStarted","Data":"671b39db6aeea2c37dc8a3ea03698401b29437f27a7c42be3d57da11e38c38dd"} Dec 02 10:29:17 crc kubenswrapper[4813]: I1202 10:29:17.890430 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tqbbq" podStartSLOduration=2.094308841 podStartE2EDuration="4.890406213s" podCreationTimestamp="2025-12-02 10:29:13 +0000 UTC" firstStartedPulling="2025-12-02 10:29:14.458563522 +0000 UTC m=+1278.653737824" lastFinishedPulling="2025-12-02 10:29:17.254660904 +0000 UTC m=+1281.449835196" observedRunningTime="2025-12-02 10:29:17.889657251 +0000 UTC m=+1282.084831573" watchObservedRunningTime="2025-12-02 10:29:17.890406213 +0000 UTC m=+1282.085580535" Dec 02 10:29:18 crc kubenswrapper[4813]: I1202 10:29:18.883397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" event={"ID":"382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8","Type":"ContainerStarted","Data":"7f2334bc7d0d1fc6972b59bddd75a26d6c683d3f13de48756835eff78b0969f0"} Dec 02 10:29:18 crc kubenswrapper[4813]: I1202 10:29:18.903813 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-tdnqv" podStartSLOduration=1.800268913 podStartE2EDuration="5.903726337s" podCreationTimestamp="2025-12-02 10:29:13 +0000 UTC" firstStartedPulling="2025-12-02 10:29:14.503817686 +0000 UTC m=+1278.698991988" lastFinishedPulling="2025-12-02 10:29:18.60727511 +0000 UTC m=+1282.802449412" observedRunningTime="2025-12-02 10:29:18.899687732 +0000 UTC m=+1283.094862034" watchObservedRunningTime="2025-12-02 10:29:18.903726337 +0000 UTC m=+1283.098900639" Dec 02 10:29:24 crc kubenswrapper[4813]: I1202 10:29:24.109644 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5zqxz" Dec 02 10:29:24 crc kubenswrapper[4813]: I1202 10:29:24.417178 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:24 crc kubenswrapper[4813]: I1202 10:29:24.417222 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:24 crc kubenswrapper[4813]: I1202 10:29:24.423342 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:24 crc kubenswrapper[4813]: I1202 10:29:24.917614 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b88fbc58c-twsmz" Dec 02 10:29:24 crc kubenswrapper[4813]: I1202 10:29:24.965981 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8dtjd"] Dec 02 10:29:34 crc kubenswrapper[4813]: I1202 10:29:34.273610 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:29:34 crc kubenswrapper[4813]: I1202 10:29:34.274160 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:29:34 crc kubenswrapper[4813]: I1202 10:29:34.655953 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-vg66j" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.009082 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr"] Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.012174 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.014365 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.022226 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr"] Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.208885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58f46\" (UniqueName: \"kubernetes.io/projected/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-kube-api-access-58f46\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.208982 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.209016 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.309946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.309999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.310111 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58f46\" (UniqueName: \"kubernetes.io/projected/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-kube-api-access-58f46\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.310548 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.310613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.332322 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58f46\" (UniqueName: \"kubernetes.io/projected/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-kube-api-access-58f46\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.630266 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:47 crc kubenswrapper[4813]: I1202 10:29:47.834781 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr"] Dec 02 10:29:48 crc kubenswrapper[4813]: I1202 10:29:48.043794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" event={"ID":"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef","Type":"ContainerStarted","Data":"51c192d738e39d94ab8bcf47e6a7c3b8a4ae3441bbf1709586ceee1727a5cec9"} Dec 02 10:29:49 crc kubenswrapper[4813]: I1202 10:29:49.052045 4813 generic.go:334] "Generic (PLEG): container finished" podID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerID="29708e22af0e99cd9807544ef0f9d3e0ffe63e6759487028611544c7971c8dbd" exitCode=0 Dec 02 10:29:49 crc kubenswrapper[4813]: I1202 10:29:49.052126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" event={"ID":"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef","Type":"ContainerDied","Data":"29708e22af0e99cd9807544ef0f9d3e0ffe63e6759487028611544c7971c8dbd"} Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.014272 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8dtjd" podUID="e967798d-a0d2-40e4-af66-ba0d04ac8318" containerName="console" containerID="cri-o://f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c" gracePeriod=15 Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.361364 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8dtjd_e967798d-a0d2-40e4-af66-ba0d04ac8318/console/0.log" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.361750 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.562592 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-serving-cert\") pod \"e967798d-a0d2-40e4-af66-ba0d04ac8318\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.562719 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbg7v\" (UniqueName: \"kubernetes.io/projected/e967798d-a0d2-40e4-af66-ba0d04ac8318-kube-api-access-vbg7v\") pod \"e967798d-a0d2-40e4-af66-ba0d04ac8318\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.562743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-oauth-serving-cert\") pod \"e967798d-a0d2-40e4-af66-ba0d04ac8318\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.562770 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-oauth-config\") pod \"e967798d-a0d2-40e4-af66-ba0d04ac8318\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.562841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-service-ca\") pod \"e967798d-a0d2-40e4-af66-ba0d04ac8318\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.562891 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-config\") pod \"e967798d-a0d2-40e4-af66-ba0d04ac8318\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.562915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-trusted-ca-bundle\") pod \"e967798d-a0d2-40e4-af66-ba0d04ac8318\" (UID: \"e967798d-a0d2-40e4-af66-ba0d04ac8318\") " Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.563980 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e967798d-a0d2-40e4-af66-ba0d04ac8318" (UID: "e967798d-a0d2-40e4-af66-ba0d04ac8318"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.564102 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-config" (OuterVolumeSpecName: "console-config") pod "e967798d-a0d2-40e4-af66-ba0d04ac8318" (UID: "e967798d-a0d2-40e4-af66-ba0d04ac8318"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.564146 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-service-ca" (OuterVolumeSpecName: "service-ca") pod "e967798d-a0d2-40e4-af66-ba0d04ac8318" (UID: "e967798d-a0d2-40e4-af66-ba0d04ac8318"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.564376 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e967798d-a0d2-40e4-af66-ba0d04ac8318" (UID: "e967798d-a0d2-40e4-af66-ba0d04ac8318"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.568559 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e967798d-a0d2-40e4-af66-ba0d04ac8318" (UID: "e967798d-a0d2-40e4-af66-ba0d04ac8318"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.568824 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e967798d-a0d2-40e4-af66-ba0d04ac8318-kube-api-access-vbg7v" (OuterVolumeSpecName: "kube-api-access-vbg7v") pod "e967798d-a0d2-40e4-af66-ba0d04ac8318" (UID: "e967798d-a0d2-40e4-af66-ba0d04ac8318"). InnerVolumeSpecName "kube-api-access-vbg7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.569112 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e967798d-a0d2-40e4-af66-ba0d04ac8318" (UID: "e967798d-a0d2-40e4-af66-ba0d04ac8318"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.663823 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.663860 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbg7v\" (UniqueName: \"kubernetes.io/projected/e967798d-a0d2-40e4-af66-ba0d04ac8318-kube-api-access-vbg7v\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.663874 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.663886 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.663897 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.663906 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:50 crc kubenswrapper[4813]: I1202 10:29:50.663916 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e967798d-a0d2-40e4-af66-ba0d04ac8318-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.064513 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8dtjd_e967798d-a0d2-40e4-af66-ba0d04ac8318/console/0.log" Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.064563 4813 generic.go:334] "Generic (PLEG): container finished" podID="e967798d-a0d2-40e4-af66-ba0d04ac8318" containerID="f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c" exitCode=2 Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.064627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8dtjd" event={"ID":"e967798d-a0d2-40e4-af66-ba0d04ac8318","Type":"ContainerDied","Data":"f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c"} Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.064654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8dtjd" event={"ID":"e967798d-a0d2-40e4-af66-ba0d04ac8318","Type":"ContainerDied","Data":"4fcef7e8789191e7a118d58407efae3fd16c7caafe706a872f38e4d2c6fd1ad1"} Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.064656 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8dtjd" Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.064670 4813 scope.go:117] "RemoveContainer" containerID="f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c" Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.066368 4813 generic.go:334] "Generic (PLEG): container finished" podID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerID="b13ee783decc08182a7a45bd335f9dd584288295217d1519487486e0bebe20c4" exitCode=0 Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.066401 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" event={"ID":"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef","Type":"ContainerDied","Data":"b13ee783decc08182a7a45bd335f9dd584288295217d1519487486e0bebe20c4"} Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.094924 4813 scope.go:117] "RemoveContainer" containerID="f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c" Dec 02 10:29:51 crc kubenswrapper[4813]: E1202 10:29:51.095358 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c\": container with ID starting with f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c not found: ID does not exist" containerID="f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c" Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.095394 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c"} err="failed to get container status \"f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c\": rpc error: code = NotFound desc = could not find container \"f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c\": container with ID starting with f9edf37a61b41883df29416366f01f74e45eec839065a783899b6fcb24de2b6c not found: ID does not exist" Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.100199 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8dtjd"] Dec 02 10:29:51 crc kubenswrapper[4813]: I1202 10:29:51.104200 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8dtjd"] Dec 02 10:29:52 crc kubenswrapper[4813]: I1202 10:29:52.073992 4813 generic.go:334] "Generic (PLEG): container finished" podID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerID="6f5b9b49afaca2220325b26d13363c71f22863be64849ab5965f2dd472982c25" exitCode=0 Dec 02 10:29:52 crc kubenswrapper[4813]: I1202 10:29:52.075111 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e967798d-a0d2-40e4-af66-ba0d04ac8318" path="/var/lib/kubelet/pods/e967798d-a0d2-40e4-af66-ba0d04ac8318/volumes" Dec 02 10:29:52 crc kubenswrapper[4813]: I1202 10:29:52.075642 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" event={"ID":"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef","Type":"ContainerDied","Data":"6f5b9b49afaca2220325b26d13363c71f22863be64849ab5965f2dd472982c25"} Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.292439 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.299483 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58f46\" (UniqueName: \"kubernetes.io/projected/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-kube-api-access-58f46\") pod \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.299535 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-util\") pod \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.299566 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-bundle\") pod \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\" (UID: \"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef\") " Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.300618 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-bundle" (OuterVolumeSpecName: "bundle") pod "a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" (UID: "a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.305427 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-kube-api-access-58f46" (OuterVolumeSpecName: "kube-api-access-58f46") pod "a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" (UID: "a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef"). InnerVolumeSpecName "kube-api-access-58f46". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.313899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-util" (OuterVolumeSpecName: "util") pod "a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" (UID: "a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.400931 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58f46\" (UniqueName: \"kubernetes.io/projected/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-kube-api-access-58f46\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.400974 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-util\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:53 crc kubenswrapper[4813]: I1202 10:29:53.400984 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:29:54 crc kubenswrapper[4813]: I1202 10:29:54.086480 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" event={"ID":"a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef","Type":"ContainerDied","Data":"51c192d738e39d94ab8bcf47e6a7c3b8a4ae3441bbf1709586ceee1727a5cec9"} Dec 02 10:29:54 crc kubenswrapper[4813]: I1202 10:29:54.086527 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51c192d738e39d94ab8bcf47e6a7c3b8a4ae3441bbf1709586ceee1727a5cec9" Dec 02 10:29:54 crc kubenswrapper[4813]: I1202 10:29:54.086556 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.126014 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr"] Dec 02 10:30:00 crc kubenswrapper[4813]: E1202 10:30:00.126808 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerName="util" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.126825 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerName="util" Dec 02 10:30:00 crc kubenswrapper[4813]: E1202 10:30:00.126855 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerName="extract" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.126863 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerName="extract" Dec 02 10:30:00 crc kubenswrapper[4813]: E1202 10:30:00.126873 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e967798d-a0d2-40e4-af66-ba0d04ac8318" containerName="console" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.126880 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e967798d-a0d2-40e4-af66-ba0d04ac8318" containerName="console" Dec 02 10:30:00 crc kubenswrapper[4813]: E1202 10:30:00.126895 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerName="pull" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.126902 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerName="pull" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.127010 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e967798d-a0d2-40e4-af66-ba0d04ac8318" containerName="console" Dec 
02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.127027 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef" containerName="extract" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.127591 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.129845 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.130212 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.139332 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr"] Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.277903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4c2118-f078-4715-9077-171056d01b9b-secret-volume\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.278112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkt2m\" (UniqueName: \"kubernetes.io/projected/aa4c2118-f078-4715-9077-171056d01b9b-kube-api-access-mkt2m\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.278188 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4c2118-f078-4715-9077-171056d01b9b-config-volume\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.379088 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4c2118-f078-4715-9077-171056d01b9b-secret-volume\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.379179 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkt2m\" (UniqueName: \"kubernetes.io/projected/aa4c2118-f078-4715-9077-171056d01b9b-kube-api-access-mkt2m\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.379213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4c2118-f078-4715-9077-171056d01b9b-config-volume\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.380332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4c2118-f078-4715-9077-171056d01b9b-config-volume\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.388210 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4c2118-f078-4715-9077-171056d01b9b-secret-volume\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.420049 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkt2m\" (UniqueName: \"kubernetes.io/projected/aa4c2118-f078-4715-9077-171056d01b9b-kube-api-access-mkt2m\") pod \"collect-profiles-29411190-nz6rr\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.458256 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:00 crc kubenswrapper[4813]: I1202 10:30:00.877711 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr"] Dec 02 10:30:01 crc kubenswrapper[4813]: I1202 10:30:01.124708 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" event={"ID":"aa4c2118-f078-4715-9077-171056d01b9b","Type":"ContainerStarted","Data":"e804c712acd104654d7ade4ba9b7883f653f7423ae5a524fec41dc586d571286"} Dec 02 10:30:01 crc kubenswrapper[4813]: I1202 10:30:01.125165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" event={"ID":"aa4c2118-f078-4715-9077-171056d01b9b","Type":"ContainerStarted","Data":"3baff0d433392c8b4048fa762ff56b514e4d6aff3c0de6a5533bbc576e5e016f"} Dec 02 10:30:01 crc kubenswrapper[4813]: I1202 10:30:01.142882 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" podStartSLOduration=1.142862957 podStartE2EDuration="1.142862957s" podCreationTimestamp="2025-12-02 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:30:01.141280842 +0000 UTC m=+1325.336455144" watchObservedRunningTime="2025-12-02 10:30:01.142862957 +0000 UTC m=+1325.338037259" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.120949 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r"] Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.122098 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.124702 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.124972 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.125146 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.125439 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fv7p4" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.125597 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.142628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r"] Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.147102 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa4c2118-f078-4715-9077-171056d01b9b" containerID="e804c712acd104654d7ade4ba9b7883f653f7423ae5a524fec41dc586d571286" exitCode=0 Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.147183 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" event={"ID":"aa4c2118-f078-4715-9077-171056d01b9b","Type":"ContainerDied","Data":"e804c712acd104654d7ade4ba9b7883f653f7423ae5a524fec41dc586d571286"} Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.306199 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz594\" (UniqueName: \"kubernetes.io/projected/73fd072b-2a9c-4d81-968f-86bd031430af-kube-api-access-jz594\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: \"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.306274 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73fd072b-2a9c-4d81-968f-86bd031430af-apiservice-cert\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: \"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.306332 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73fd072b-2a9c-4d81-968f-86bd031430af-webhook-cert\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: \"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.408016 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz594\" (UniqueName: \"kubernetes.io/projected/73fd072b-2a9c-4d81-968f-86bd031430af-kube-api-access-jz594\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: 
\"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.408099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73fd072b-2a9c-4d81-968f-86bd031430af-apiservice-cert\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: \"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.408152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73fd072b-2a9c-4d81-968f-86bd031430af-webhook-cert\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: \"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.414769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73fd072b-2a9c-4d81-968f-86bd031430af-apiservice-cert\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: \"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.415428 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73fd072b-2a9c-4d81-968f-86bd031430af-webhook-cert\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: \"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.448211 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz594\" (UniqueName: \"kubernetes.io/projected/73fd072b-2a9c-4d81-968f-86bd031430af-kube-api-access-jz594\") pod \"metallb-operator-controller-manager-7ff5887cf9-z4k7r\" (UID: \"73fd072b-2a9c-4d81-968f-86bd031430af\") " pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.449001 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.494545 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69"] Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.495774 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.499277 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4twnp" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.499575 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.499725 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.504332 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69"] Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.610564 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkq86\" (UniqueName: \"kubernetes.io/projected/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-kube-api-access-dkq86\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.611031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-webhook-cert\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.611067 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-apiservice-cert\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.716348 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-webhook-cert\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.716415 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-apiservice-cert\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.716528 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkq86\" (UniqueName: \"kubernetes.io/projected/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-kube-api-access-dkq86\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 
10:30:02.721791 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-webhook-cert\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.721844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-apiservice-cert\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.747946 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkq86\" (UniqueName: \"kubernetes.io/projected/4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9-kube-api-access-dkq86\") pod \"metallb-operator-webhook-server-7c5b954b74-kvm69\" (UID: \"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9\") " pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.840560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:02 crc kubenswrapper[4813]: I1202 10:30:02.995723 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r"] Dec 02 10:30:03 crc kubenswrapper[4813]: W1202 10:30:03.008477 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73fd072b_2a9c_4d81_968f_86bd031430af.slice/crio-614c9a6a092b95942c43daad6f70e63813b11775196616e507319386d35f5c87 WatchSource:0}: Error finding container 614c9a6a092b95942c43daad6f70e63813b11775196616e507319386d35f5c87: Status 404 returned error can't find the container with id 614c9a6a092b95942c43daad6f70e63813b11775196616e507319386d35f5c87 Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.154235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" event={"ID":"73fd072b-2a9c-4d81-968f-86bd031430af","Type":"ContainerStarted","Data":"614c9a6a092b95942c43daad6f70e63813b11775196616e507319386d35f5c87"} Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.189799 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69"] Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.391728 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.527755 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4c2118-f078-4715-9077-171056d01b9b-config-volume\") pod \"aa4c2118-f078-4715-9077-171056d01b9b\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.527856 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4c2118-f078-4715-9077-171056d01b9b-secret-volume\") pod \"aa4c2118-f078-4715-9077-171056d01b9b\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.527883 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkt2m\" (UniqueName: \"kubernetes.io/projected/aa4c2118-f078-4715-9077-171056d01b9b-kube-api-access-mkt2m\") pod \"aa4c2118-f078-4715-9077-171056d01b9b\" (UID: \"aa4c2118-f078-4715-9077-171056d01b9b\") " Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.528537 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4c2118-f078-4715-9077-171056d01b9b-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa4c2118-f078-4715-9077-171056d01b9b" (UID: "aa4c2118-f078-4715-9077-171056d01b9b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.529350 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4c2118-f078-4715-9077-171056d01b9b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.533264 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa4c2118-f078-4715-9077-171056d01b9b-kube-api-access-mkt2m" (OuterVolumeSpecName: "kube-api-access-mkt2m") pod "aa4c2118-f078-4715-9077-171056d01b9b" (UID: "aa4c2118-f078-4715-9077-171056d01b9b"). InnerVolumeSpecName "kube-api-access-mkt2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.533550 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4c2118-f078-4715-9077-171056d01b9b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa4c2118-f078-4715-9077-171056d01b9b" (UID: "aa4c2118-f078-4715-9077-171056d01b9b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.630698 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4c2118-f078-4715-9077-171056d01b9b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:30:03 crc kubenswrapper[4813]: I1202 10:30:03.630737 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkt2m\" (UniqueName: \"kubernetes.io/projected/aa4c2118-f078-4715-9077-171056d01b9b-kube-api-access-mkt2m\") on node \"crc\" DevicePath \"\"" Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.161117 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.161064 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr" event={"ID":"aa4c2118-f078-4715-9077-171056d01b9b","Type":"ContainerDied","Data":"3baff0d433392c8b4048fa762ff56b514e4d6aff3c0de6a5533bbc576e5e016f"} Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.161678 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3baff0d433392c8b4048fa762ff56b514e4d6aff3c0de6a5533bbc576e5e016f" Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.162634 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" event={"ID":"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9","Type":"ContainerStarted","Data":"dfde0868d3a9fcba9e198d8fece8b79709db7140e3033a50649b626e0b57083e"} Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.273337 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.273421 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.273461 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.273979 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6026076896f55bb919161f6d03c4a9615a39a32a45726f9be0f5d24c59e6a733"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:30:04 crc kubenswrapper[4813]: I1202 10:30:04.274036 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://6026076896f55bb919161f6d03c4a9615a39a32a45726f9be0f5d24c59e6a733" gracePeriod=600 Dec 02 10:30:05 crc kubenswrapper[4813]: I1202 10:30:05.172581 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="6026076896f55bb919161f6d03c4a9615a39a32a45726f9be0f5d24c59e6a733" exitCode=0 Dec 02 10:30:05 crc kubenswrapper[4813]: I1202 10:30:05.172687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"6026076896f55bb919161f6d03c4a9615a39a32a45726f9be0f5d24c59e6a733"} Dec 02 10:30:05 crc kubenswrapper[4813]: I1202 10:30:05.174104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376"} Dec 02 10:30:05 crc kubenswrapper[4813]: I1202 10:30:05.174202 4813 scope.go:117] "RemoveContainer" containerID="0c696c00353d51b2859ba7db6d368ec5e4be7615fb6e4b5668381ae90d4c6e32" Dec 02 10:30:09 crc kubenswrapper[4813]: I1202 10:30:09.197745 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" event={"ID":"73fd072b-2a9c-4d81-968f-86bd031430af","Type":"ContainerStarted","Data":"4c5c970a1e59043a84d4b608b8e5bafe6dd38deb47c1d2720061b395aea87863"} Dec 02 10:30:09 crc kubenswrapper[4813]: I1202 10:30:09.198368 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:09 crc kubenswrapper[4813]: I1202 10:30:09.199497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" event={"ID":"4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9","Type":"ContainerStarted","Data":"67753a949e20e5c2a13800fffd8184b60688185833c693c991ad8b0ccc54e380"} Dec 02 10:30:09 crc kubenswrapper[4813]: I1202 10:30:09.199665 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:09 crc kubenswrapper[4813]: I1202 10:30:09.216160 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" podStartSLOduration=1.668703614 podStartE2EDuration="7.216142574s" podCreationTimestamp="2025-12-02 10:30:02 +0000 UTC" firstStartedPulling="2025-12-02 10:30:03.02146284 +0000 UTC m=+1327.216637142" lastFinishedPulling="2025-12-02 10:30:08.5689018 +0000 UTC m=+1332.764076102" observedRunningTime="2025-12-02 10:30:09.21497924 +0000 UTC m=+1333.410153553" watchObservedRunningTime="2025-12-02 10:30:09.216142574 +0000 UTC m=+1333.411316876" Dec 02 10:30:09 crc kubenswrapper[4813]: I1202 10:30:09.236925 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" podStartSLOduration=1.8673152370000001 podStartE2EDuration="7.236910188s" podCreationTimestamp="2025-12-02 10:30:02 +0000 UTC" firstStartedPulling="2025-12-02 10:30:03.213851466 +0000 UTC m=+1327.409025768" lastFinishedPulling="2025-12-02 10:30:08.583446417 +0000 UTC m=+1332.778620719" observedRunningTime="2025-12-02 10:30:09.232239594 +0000 UTC m=+1333.427413906" watchObservedRunningTime="2025-12-02 10:30:09.236910188 +0000 UTC m=+1333.432084480" Dec 02 10:30:22 crc kubenswrapper[4813]: I1202 10:30:22.844791 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7c5b954b74-kvm69" Dec 02 10:30:42 crc kubenswrapper[4813]: I1202 10:30:42.452922 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7ff5887cf9-z4k7r" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.138155 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-46r87"] Dec 02 10:30:43 crc kubenswrapper[4813]: E1202 10:30:43.138762 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4c2118-f078-4715-9077-171056d01b9b" 
containerName="collect-profiles" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.138782 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4c2118-f078-4715-9077-171056d01b9b" containerName="collect-profiles" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.138918 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4c2118-f078-4715-9077-171056d01b9b" containerName="collect-profiles" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.141178 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.144967 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz"] Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.145724 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.146746 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.146998 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-pzdlk" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.147997 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.148266 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.161715 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz"] Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.214761 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2ssx2"] Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.215674 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.218949 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.219101 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.219101 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pgtvg" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.219317 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.238193 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-rjbv4"] Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.239043 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.243918 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.259771 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-rjbv4"] Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.290152 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn6b5\" (UniqueName: \"kubernetes.io/projected/19477a8e-e8a1-43b4-9272-3f1394a27c60-kube-api-access-mn6b5\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.290192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-metrics\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.290228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-reloader\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.290380 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-sockets\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.290432 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-startup\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.290449 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-conf\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.290530 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19477a8e-e8a1-43b4-9272-3f1394a27c60-metrics-certs\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.290565 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg4jn\" (UniqueName: \"kubernetes.io/projected/d71d0a99-b714-49c3-abbd-0e9bcf238c38-kube-api-access-qg4jn\") pod \"frr-k8s-webhook-server-7fcb986d4-kvvcz\" (UID: \"d71d0a99-b714-49c3-abbd-0e9bcf238c38\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 
10:30:43.290594 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71d0a99-b714-49c3-abbd-0e9bcf238c38-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kvvcz\" (UID: \"d71d0a99-b714-49c3-abbd-0e9bcf238c38\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.392690 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.392794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-metrics-certs\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.392830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kts9k\" (UniqueName: \"kubernetes.io/projected/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-kube-api-access-kts9k\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.392862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19477a8e-e8a1-43b4-9272-3f1394a27c60-metrics-certs\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.392978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg4jn\" (UniqueName: \"kubernetes.io/projected/d71d0a99-b714-49c3-abbd-0e9bcf238c38-kube-api-access-qg4jn\") pod \"frr-k8s-webhook-server-7fcb986d4-kvvcz\" (UID: \"d71d0a99-b714-49c3-abbd-0e9bcf238c38\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393064 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71d0a99-b714-49c3-abbd-0e9bcf238c38-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kvvcz\" (UID: \"d71d0a99-b714-49c3-abbd-0e9bcf238c38\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn6b5\" (UniqueName: \"kubernetes.io/projected/19477a8e-e8a1-43b4-9272-3f1394a27c60-kube-api-access-mn6b5\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393206 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-cert\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 
10:30:43.393250 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-metrics\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-metrics-certs\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-reloader\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6vt\" (UniqueName: \"kubernetes.io/projected/ea89adf2-768b-41ee-93f1-1a52803ace65-kube-api-access-8h6vt\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393604 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ea89adf2-768b-41ee-93f1-1a52803ace65-metallb-excludel2\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393777 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-sockets\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-startup\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.393865 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-conf\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.394343 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-sockets\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.394344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-reloader\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " 
pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.394394 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-conf\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.394449 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/19477a8e-e8a1-43b4-9272-3f1394a27c60-metrics\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.394912 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/19477a8e-e8a1-43b4-9272-3f1394a27c60-frr-startup\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.400977 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19477a8e-e8a1-43b4-9272-3f1394a27c60-metrics-certs\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.402871 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71d0a99-b714-49c3-abbd-0e9bcf238c38-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kvvcz\" (UID: \"d71d0a99-b714-49c3-abbd-0e9bcf238c38\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.423378 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn6b5\" (UniqueName: \"kubernetes.io/projected/19477a8e-e8a1-43b4-9272-3f1394a27c60-kube-api-access-mn6b5\") pod \"frr-k8s-46r87\" (UID: \"19477a8e-e8a1-43b4-9272-3f1394a27c60\") " pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.424761 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg4jn\" (UniqueName: \"kubernetes.io/projected/d71d0a99-b714-49c3-abbd-0e9bcf238c38-kube-api-access-qg4jn\") pod \"frr-k8s-webhook-server-7fcb986d4-kvvcz\" (UID: \"d71d0a99-b714-49c3-abbd-0e9bcf238c38\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.466172 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-46r87" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.475061 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.495424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ea89adf2-768b-41ee-93f1-1a52803ace65-metallb-excludel2\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.495549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.495595 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-metrics-certs\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.495638 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kts9k\" (UniqueName: \"kubernetes.io/projected/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-kube-api-access-kts9k\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.495679 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-cert\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.495713 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-metrics-certs\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.495743 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6vt\" (UniqueName: \"kubernetes.io/projected/ea89adf2-768b-41ee-93f1-1a52803ace65-kube-api-access-8h6vt\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.498242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ea89adf2-768b-41ee-93f1-1a52803ace65-metallb-excludel2\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: E1202 10:30:43.498380 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 10:30:43 crc kubenswrapper[4813]: E1202 10:30:43.498440 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist podName:ea89adf2-768b-41ee-93f1-1a52803ace65 nodeName:}" failed. 
No retries permitted until 2025-12-02 10:30:43.998420902 +0000 UTC m=+1368.193595214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist") pod "speaker-2ssx2" (UID: "ea89adf2-768b-41ee-93f1-1a52803ace65") : secret "metallb-memberlist" not found Dec 02 10:30:43 crc kubenswrapper[4813]: E1202 10:30:43.498722 4813 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 02 10:30:43 crc kubenswrapper[4813]: E1202 10:30:43.498758 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-metrics-certs podName:ea89adf2-768b-41ee-93f1-1a52803ace65 nodeName:}" failed. No retries permitted until 2025-12-02 10:30:43.998746051 +0000 UTC m=+1368.193920353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-metrics-certs") pod "speaker-2ssx2" (UID: "ea89adf2-768b-41ee-93f1-1a52803ace65") : secret "speaker-certs-secret" not found Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.505191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-metrics-certs\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.505478 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-cert\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.516488 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kts9k\" (UniqueName: \"kubernetes.io/projected/d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353-kube-api-access-kts9k\") pod \"controller-f8648f98b-rjbv4\" (UID: \"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353\") " pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.520304 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6vt\" (UniqueName: \"kubernetes.io/projected/ea89adf2-768b-41ee-93f1-1a52803ace65-kube-api-access-8h6vt\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.553005 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.898426 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz"] Dec 02 10:30:43 crc kubenswrapper[4813]: I1202 10:30:43.978299 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-rjbv4"] Dec 02 10:30:43 crc kubenswrapper[4813]: W1202 10:30:43.986802 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6902fe1_ed9a_4cbf_80b2_3b1f31d4d353.slice/crio-99d0c1a27eb881367513aff720824e691ca08f547f19540ef1de06e632461e5e WatchSource:0}: Error finding container 99d0c1a27eb881367513aff720824e691ca08f547f19540ef1de06e632461e5e: Status 404 returned error can't find the container with id 99d0c1a27eb881367513aff720824e691ca08f547f19540ef1de06e632461e5e Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.001895 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.001992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-metrics-certs\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:44 crc kubenswrapper[4813]: E1202 10:30:44.002063 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 02 10:30:44 crc kubenswrapper[4813]: E1202 10:30:44.002188 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist podName:ea89adf2-768b-41ee-93f1-1a52803ace65 nodeName:}" failed. No retries permitted until 2025-12-02 10:30:45.002165409 +0000 UTC m=+1369.197339891 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist") pod "speaker-2ssx2" (UID: "ea89adf2-768b-41ee-93f1-1a52803ace65") : secret "metallb-memberlist" not found Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.006449 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-metrics-certs\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.431343 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rjbv4" event={"ID":"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353","Type":"ContainerStarted","Data":"7d0f5052d855c9e06d85734daf9fa796728201054076c6b8525cb0bf581f2619"} Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.431656 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rjbv4" event={"ID":"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353","Type":"ContainerStarted","Data":"ac2b009aaba01b58dedb14fa36b70f8715dc87febf3bb1feeff61860a60feb3a"} Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.431667 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rjbv4" event={"ID":"d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353","Type":"ContainerStarted","Data":"99d0c1a27eb881367513aff720824e691ca08f547f19540ef1de06e632461e5e"} Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.431995 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.433352 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerStarted","Data":"407910751b88d1ebc0db55f24365b1799a008d50ceca9b2e9b6875d9060141cb"} Dec 02 10:30:44 crc kubenswrapper[4813]: I1202 10:30:44.434383 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" event={"ID":"d71d0a99-b714-49c3-abbd-0e9bcf238c38","Type":"ContainerStarted","Data":"dc8f81f6638d4d2707aa3526f8fa6a534da1ad1b64ac3597f151ed61cc1c3ba2"} Dec 02 10:30:45 crc kubenswrapper[4813]: I1202 10:30:45.013463 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:45 crc kubenswrapper[4813]: I1202 10:30:45.025699 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ea89adf2-768b-41ee-93f1-1a52803ace65-memberlist\") pod \"speaker-2ssx2\" (UID: \"ea89adf2-768b-41ee-93f1-1a52803ace65\") " pod="metallb-system/speaker-2ssx2" Dec 02 10:30:45 crc kubenswrapper[4813]: I1202 10:30:45.029708 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2ssx2" Dec 02 10:30:45 crc kubenswrapper[4813]: I1202 10:30:45.443101 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2ssx2" event={"ID":"ea89adf2-768b-41ee-93f1-1a52803ace65","Type":"ContainerStarted","Data":"c8b2f41675a15d2919b03a9a2af758794ed6558fce0354068a8a90d66269cb4c"} Dec 02 10:30:45 crc kubenswrapper[4813]: I1202 10:30:45.443487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2ssx2" event={"ID":"ea89adf2-768b-41ee-93f1-1a52803ace65","Type":"ContainerStarted","Data":"bb61940301a14be949763255d0bd917348599564c95d19ccd6837733a7e1d32c"} Dec 02 10:30:46 crc kubenswrapper[4813]: I1202 10:30:46.087693 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-rjbv4" podStartSLOduration=3.087671343 podStartE2EDuration="3.087671343s" podCreationTimestamp="2025-12-02 10:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:30:44.454451952 +0000 UTC m=+1368.649626264" watchObservedRunningTime="2025-12-02 10:30:46.087671343 +0000 UTC m=+1370.282845645" Dec 02 10:30:46 crc kubenswrapper[4813]: I1202 10:30:46.471592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2ssx2" event={"ID":"ea89adf2-768b-41ee-93f1-1a52803ace65","Type":"ContainerStarted","Data":"44363260f5a3a6fa3ac03b2d19402037bf22dbcef241f2e7576ec762507e55d7"} Dec 02 10:30:46 crc kubenswrapper[4813]: I1202 10:30:46.472522 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2ssx2" Dec 02 10:30:46 crc kubenswrapper[4813]: I1202 10:30:46.504003 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2ssx2" podStartSLOduration=3.503980387 podStartE2EDuration="3.503980387s" podCreationTimestamp="2025-12-02 10:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:30:46.499117728 +0000 UTC m=+1370.694292030" watchObservedRunningTime="2025-12-02 10:30:46.503980387 +0000 UTC m=+1370.699154689" Dec 02 10:30:47 crc kubenswrapper[4813]: I1202 10:30:47.806776 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pgwk5"] Dec 02 10:30:47 crc kubenswrapper[4813]: I1202 10:30:47.808178 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:47 crc kubenswrapper[4813]: I1202 10:30:47.823973 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgwk5"] Dec 02 10:30:47 crc kubenswrapper[4813]: I1202 10:30:47.961196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c972b\" (UniqueName: \"kubernetes.io/projected/68124d51-9609-456c-a38e-b7107ccf22eb-kube-api-access-c972b\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:47 crc kubenswrapper[4813]: I1202 10:30:47.961280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-catalog-content\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:47 crc kubenswrapper[4813]: I1202 10:30:47.961320 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-utilities\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.062707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c972b\" (UniqueName: \"kubernetes.io/projected/68124d51-9609-456c-a38e-b7107ccf22eb-kube-api-access-c972b\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.063123 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-catalog-content\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.063174 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-utilities\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.063756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-utilities\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.064335 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-catalog-content\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.089950 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c972b\" (UniqueName: \"kubernetes.io/projected/68124d51-9609-456c-a38e-b7107ccf22eb-kube-api-access-c972b\") pod \"certified-operators-pgwk5\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.133351 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.410632 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgwk5"] Dec 02 10:30:48 crc kubenswrapper[4813]: I1202 10:30:48.492584 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgwk5" event={"ID":"68124d51-9609-456c-a38e-b7107ccf22eb","Type":"ContainerStarted","Data":"e0e9572af4f33f34e38460020b25081b2da9cc1369f6e42596bb232bca02d9c1"} Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.134377 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hhcm6"] Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.135785 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.158234 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhcm6"] Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.305855 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-catalog-content\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.305968 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slgv4\" (UniqueName: \"kubernetes.io/projected/d825a460-a5da-4a58-a5f9-fb938ccac9d8-kube-api-access-slgv4\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.306011 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-utilities\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.407512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-utilities\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.407612 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-catalog-content\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc 
kubenswrapper[4813]: I1202 10:30:51.407685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slgv4\" (UniqueName: \"kubernetes.io/projected/d825a460-a5da-4a58-a5f9-fb938ccac9d8-kube-api-access-slgv4\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.408202 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-utilities\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.624016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-catalog-content\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.624968 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slgv4\" (UniqueName: \"kubernetes.io/projected/d825a460-a5da-4a58-a5f9-fb938ccac9d8-kube-api-access-slgv4\") pod \"redhat-operators-hhcm6\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:51 crc kubenswrapper[4813]: I1202 10:30:51.754711 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:30:52 crc kubenswrapper[4813]: I1202 10:30:52.514089 4813 generic.go:334] "Generic (PLEG): container finished" podID="68124d51-9609-456c-a38e-b7107ccf22eb" containerID="ca82769fbfb2b30c5914adc77fe614ca5be4c2d1e02efbcf9b640bfa54836b05" exitCode=0 Dec 02 10:30:52 crc kubenswrapper[4813]: I1202 10:30:52.514160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgwk5" event={"ID":"68124d51-9609-456c-a38e-b7107ccf22eb","Type":"ContainerDied","Data":"ca82769fbfb2b30c5914adc77fe614ca5be4c2d1e02efbcf9b640bfa54836b05"} Dec 02 10:30:55 crc kubenswrapper[4813]: I1202 10:30:55.034145 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2ssx2" Dec 02 10:30:55 crc kubenswrapper[4813]: I1202 10:30:55.940415 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhcm6"] Dec 02 10:30:55 crc kubenswrapper[4813]: W1202 10:30:55.943877 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd825a460_a5da_4a58_a5f9_fb938ccac9d8.slice/crio-fd625c938a45ea701f7949a49f1fae4dfd60c94fd23b771611d769ef4ca7656c WatchSource:0}: Error finding container fd625c938a45ea701f7949a49f1fae4dfd60c94fd23b771611d769ef4ca7656c: Status 404 returned error can't find the container with id fd625c938a45ea701f7949a49f1fae4dfd60c94fd23b771611d769ef4ca7656c Dec 02 10:30:56 crc kubenswrapper[4813]: I1202 10:30:56.541835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhcm6" event={"ID":"d825a460-a5da-4a58-a5f9-fb938ccac9d8","Type":"ContainerStarted","Data":"fd625c938a45ea701f7949a49f1fae4dfd60c94fd23b771611d769ef4ca7656c"} Dec 02 10:30:57 crc kubenswrapper[4813]: I1202 
10:30:57.854930 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qkvgq"] Dec 02 10:30:57 crc kubenswrapper[4813]: I1202 10:30:57.856368 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qkvgq" Dec 02 10:30:57 crc kubenswrapper[4813]: I1202 10:30:57.858890 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 02 10:30:57 crc kubenswrapper[4813]: I1202 10:30:57.859755 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-d4xz8" Dec 02 10:30:57 crc kubenswrapper[4813]: I1202 10:30:57.860474 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 02 10:30:57 crc kubenswrapper[4813]: I1202 10:30:57.882636 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qkvgq"] Dec 02 10:30:57 crc kubenswrapper[4813]: I1202 10:30:57.923328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlcbq\" (UniqueName: \"kubernetes.io/projected/3a8b0429-c553-4909-9ef5-39819f2f44f2-kube-api-access-rlcbq\") pod \"openstack-operator-index-qkvgq\" (UID: \"3a8b0429-c553-4909-9ef5-39819f2f44f2\") " pod="openstack-operators/openstack-operator-index-qkvgq" Dec 02 10:30:58 crc kubenswrapper[4813]: I1202 10:30:58.024868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlcbq\" (UniqueName: \"kubernetes.io/projected/3a8b0429-c553-4909-9ef5-39819f2f44f2-kube-api-access-rlcbq\") pod \"openstack-operator-index-qkvgq\" (UID: \"3a8b0429-c553-4909-9ef5-39819f2f44f2\") " pod="openstack-operators/openstack-operator-index-qkvgq" Dec 02 10:30:58 crc kubenswrapper[4813]: I1202 10:30:58.050601 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlcbq\" (UniqueName: \"kubernetes.io/projected/3a8b0429-c553-4909-9ef5-39819f2f44f2-kube-api-access-rlcbq\") pod \"openstack-operator-index-qkvgq\" (UID: \"3a8b0429-c553-4909-9ef5-39819f2f44f2\") " pod="openstack-operators/openstack-operator-index-qkvgq" Dec 02 10:30:58 crc kubenswrapper[4813]: I1202 10:30:58.187696 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qkvgq" Dec 02 10:30:58 crc kubenswrapper[4813]: I1202 10:30:58.574901 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qkvgq"] Dec 02 10:30:59 crc kubenswrapper[4813]: I1202 10:30:59.560407 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qkvgq" event={"ID":"3a8b0429-c553-4909-9ef5-39819f2f44f2","Type":"ContainerStarted","Data":"5284d2e9edf8a0f0f36a60c9528106837818152c0413c86e67f3f75e75173451"} Dec 02 10:30:59 crc kubenswrapper[4813]: I1202 10:30:59.562289 4813 generic.go:334] "Generic (PLEG): container finished" podID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerID="070bc5fe1dbe6c5874254a52638a628bfddc8b593fcf11109db3c61cba391006" exitCode=0 Dec 02 10:30:59 crc kubenswrapper[4813]: I1202 10:30:59.562377 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhcm6" event={"ID":"d825a460-a5da-4a58-a5f9-fb938ccac9d8","Type":"ContainerDied","Data":"070bc5fe1dbe6c5874254a52638a628bfddc8b593fcf11109db3c61cba391006"} Dec 02 10:30:59 crc kubenswrapper[4813]: I1202 10:30:59.564358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" event={"ID":"d71d0a99-b714-49c3-abbd-0e9bcf238c38","Type":"ContainerStarted","Data":"d48f66448df101c766ae45e80a7035f99937005775967f6ea65c504e39c09b66"} Dec 02 10:31:00 crc kubenswrapper[4813]: I1202 10:31:00.570618 4813 generic.go:334] "Generic (PLEG): container finished" podID="19477a8e-e8a1-43b4-9272-3f1394a27c60" containerID="1204c25237fc1cc2840d79068254d39c903f801dd298450b8f254214391f1914" exitCode=0 Dec 02 10:31:00 crc kubenswrapper[4813]: I1202 10:31:00.570675 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerDied","Data":"1204c25237fc1cc2840d79068254d39c903f801dd298450b8f254214391f1914"} Dec 02 10:31:00 crc kubenswrapper[4813]: I1202 10:31:00.572998 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:31:00 crc kubenswrapper[4813]: I1202 10:31:00.613572 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" podStartSLOduration=2.545546639 podStartE2EDuration="17.613546756s" podCreationTimestamp="2025-12-02 10:30:43 +0000 UTC" firstStartedPulling="2025-12-02 10:30:43.902769734 +0000 UTC m=+1368.097944036" lastFinishedPulling="2025-12-02 10:30:58.970769861 +0000 UTC m=+1383.165944153" observedRunningTime="2025-12-02 10:31:00.612248668 +0000 UTC m=+1384.807422970" watchObservedRunningTime="2025-12-02 10:31:00.613546756 +0000 UTC m=+1384.808721078" Dec 02 10:31:01 crc kubenswrapper[4813]: I1202 10:31:01.526248 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qkvgq"] Dec 02 10:31:01 crc kubenswrapper[4813]: I1202 10:31:01.578225 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgwk5" event={"ID":"68124d51-9609-456c-a38e-b7107ccf22eb","Type":"ContainerStarted","Data":"6c7ceac948532eb8e787baceceec58cf88893a8e17ffa361cd847fab320a73ef"} Dec 02 10:31:01 crc kubenswrapper[4813]: I1202 10:31:01.579574 4813 generic.go:334] "Generic (PLEG): container finished" podID="19477a8e-e8a1-43b4-9272-3f1394a27c60" 
containerID="794b8e2f7bfed1e4962b98d1cc9ef42a123927e3f4b573587199b5fc1f7c7e75" exitCode=0 Dec 02 10:31:01 crc kubenswrapper[4813]: I1202 10:31:01.579641 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerDied","Data":"794b8e2f7bfed1e4962b98d1cc9ef42a123927e3f4b573587199b5fc1f7c7e75"} Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.336737 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6v8v4"] Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.338349 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.365238 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6v8v4"] Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.484888 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c86jc\" (UniqueName: \"kubernetes.io/projected/3b20cb4e-064c-45e2-b461-ddb692c11924-kube-api-access-c86jc\") pod \"openstack-operator-index-6v8v4\" (UID: \"3b20cb4e-064c-45e2-b461-ddb692c11924\") " pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.585880 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c86jc\" (UniqueName: \"kubernetes.io/projected/3b20cb4e-064c-45e2-b461-ddb692c11924-kube-api-access-c86jc\") pod \"openstack-operator-index-6v8v4\" (UID: \"3b20cb4e-064c-45e2-b461-ddb692c11924\") " pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.591277 4813 generic.go:334] "Generic (PLEG): container finished" podID="68124d51-9609-456c-a38e-b7107ccf22eb" containerID="6c7ceac948532eb8e787baceceec58cf88893a8e17ffa361cd847fab320a73ef" exitCode=0 Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.591376 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgwk5" event={"ID":"68124d51-9609-456c-a38e-b7107ccf22eb","Type":"ContainerDied","Data":"6c7ceac948532eb8e787baceceec58cf88893a8e17ffa361cd847fab320a73ef"} Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.615542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c86jc\" (UniqueName: \"kubernetes.io/projected/3b20cb4e-064c-45e2-b461-ddb692c11924-kube-api-access-c86jc\") pod \"openstack-operator-index-6v8v4\" (UID: \"3b20cb4e-064c-45e2-b461-ddb692c11924\") " pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:02 crc kubenswrapper[4813]: I1202 10:31:02.665578 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:03 crc kubenswrapper[4813]: I1202 10:31:03.101956 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6v8v4"] Dec 02 10:31:03 crc kubenswrapper[4813]: W1202 10:31:03.372920 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b20cb4e_064c_45e2_b461_ddb692c11924.slice/crio-79f51f7956559f5d3ca97585c481d321a2961ef29a559c75ea244379c3e73c08 WatchSource:0}: Error finding container 79f51f7956559f5d3ca97585c481d321a2961ef29a559c75ea244379c3e73c08: Status 404 returned error can't find the container with id 79f51f7956559f5d3ca97585c481d321a2961ef29a559c75ea244379c3e73c08 Dec 02 10:31:03 crc kubenswrapper[4813]: I1202 10:31:03.557988 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-rjbv4" Dec 02 10:31:03 crc kubenswrapper[4813]: I1202 10:31:03.641056 4813 generic.go:334] "Generic (PLEG): container finished" podID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerID="d1ea8a8bef8bba0b9ea45a7de974bc0e16261d8b80995ea19ec255d87ba2b87f" exitCode=0 Dec 02 10:31:03 crc kubenswrapper[4813]: I1202 10:31:03.641763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhcm6" event={"ID":"d825a460-a5da-4a58-a5f9-fb938ccac9d8","Type":"ContainerDied","Data":"d1ea8a8bef8bba0b9ea45a7de974bc0e16261d8b80995ea19ec255d87ba2b87f"} Dec 02 10:31:03 crc kubenswrapper[4813]: I1202 10:31:03.676359 4813 generic.go:334] "Generic (PLEG): container finished" podID="19477a8e-e8a1-43b4-9272-3f1394a27c60" containerID="1d34ef6f0b5938c587deeafa12f167fcd56de01fe0c34a273aec0801069e6d85" exitCode=0 Dec 02 10:31:03 crc kubenswrapper[4813]: I1202 10:31:03.676474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerDied","Data":"1d34ef6f0b5938c587deeafa12f167fcd56de01fe0c34a273aec0801069e6d85"} Dec 02 10:31:03 crc kubenswrapper[4813]: I1202 10:31:03.684478 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6v8v4" event={"ID":"3b20cb4e-064c-45e2-b461-ddb692c11924","Type":"ContainerStarted","Data":"79f51f7956559f5d3ca97585c481d321a2961ef29a559c75ea244379c3e73c08"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.704533 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgwk5" event={"ID":"68124d51-9609-456c-a38e-b7107ccf22eb","Type":"ContainerStarted","Data":"ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.706394 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qkvgq" event={"ID":"3a8b0429-c553-4909-9ef5-39819f2f44f2","Type":"ContainerStarted","Data":"43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.706530 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qkvgq" podUID="3a8b0429-c553-4909-9ef5-39819f2f44f2" containerName="registry-server" containerID="cri-o://43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc" gracePeriod=2 Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.712496 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhcm6" event={"ID":"d825a460-a5da-4a58-a5f9-fb938ccac9d8","Type":"ContainerStarted","Data":"6d3aff5fd50d6e4e27477eb3f4d77295228cca552f5eba7aca9be2505a709ce7"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.721607 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerStarted","Data":"d2cab55bdd1d743d098d8aa1337afd9c34796ef11b63e1417176426706312ec9"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.721660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerStarted","Data":"0a2e17042f94fdeafe17ce8105203232410f5bc3bf5206634f70a2e1535975c1"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.721673 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerStarted","Data":"f49949f6b5f030050ed44b7ff684c3a12631e275797cf73e88b973c4f76f1961"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.721687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerStarted","Data":"b50ecd7d82dc4badfb2622745b1ec41602d8189febdf4feb23cfbb5807f172a1"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.724463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6v8v4" event={"ID":"3b20cb4e-064c-45e2-b461-ddb692c11924","Type":"ContainerStarted","Data":"8d4a4b843cef3fbc5f9ee7068da4da99120724f94f71051a0ae7e24dee9196d1"} Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.732003 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pgwk5" podStartSLOduration=10.283552286 podStartE2EDuration="18.731940829s" podCreationTimestamp="2025-12-02 10:30:47 +0000 UTC" firstStartedPulling="2025-12-02 10:30:55.562044828 +0000 UTC m=+1379.757219140" lastFinishedPulling="2025-12-02 10:31:04.010433371 +0000 UTC m=+1388.205607683" observedRunningTime="2025-12-02 10:31:05.730520629 +0000 UTC m=+1389.925694931" watchObservedRunningTime="2025-12-02 10:31:05.731940829 +0000 UTC m=+1389.927115141" Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.755902 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qkvgq" podStartSLOduration=2.48331131 podStartE2EDuration="8.755873954s" podCreationTimestamp="2025-12-02 10:30:57 +0000 UTC" firstStartedPulling="2025-12-02 10:30:58.583311903 +0000 UTC m=+1382.778486215" lastFinishedPulling="2025-12-02 10:31:04.855874557 +0000 UTC m=+1389.051048859" observedRunningTime="2025-12-02 10:31:05.748168494 +0000 UTC m=+1389.943342796" watchObservedRunningTime="2025-12-02 10:31:05.755873954 +0000 UTC m=+1389.951048266" Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.774546 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hhcm6" podStartSLOduration=9.483567636 podStartE2EDuration="14.774525858s" podCreationTimestamp="2025-12-02 10:30:51 +0000 UTC" firstStartedPulling="2025-12-02 10:30:59.56369488 +0000 UTC m=+1383.758869182" lastFinishedPulling="2025-12-02 10:31:04.854653102 +0000 UTC m=+1389.049827404" 
observedRunningTime="2025-12-02 10:31:05.770236425 +0000 UTC m=+1389.965410737" watchObservedRunningTime="2025-12-02 10:31:05.774525858 +0000 UTC m=+1389.969700160" Dec 02 10:31:05 crc kubenswrapper[4813]: I1202 10:31:05.798235 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6v8v4" podStartSLOduration=2.306640337 podStartE2EDuration="3.798205565s" podCreationTimestamp="2025-12-02 10:31:02 +0000 UTC" firstStartedPulling="2025-12-02 10:31:03.376474297 +0000 UTC m=+1387.571648599" lastFinishedPulling="2025-12-02 10:31:04.868039525 +0000 UTC m=+1389.063213827" observedRunningTime="2025-12-02 10:31:05.791790452 +0000 UTC m=+1389.986964754" watchObservedRunningTime="2025-12-02 10:31:05.798205565 +0000 UTC m=+1389.993379897" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.151270 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qkvgq" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.346473 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlcbq\" (UniqueName: \"kubernetes.io/projected/3a8b0429-c553-4909-9ef5-39819f2f44f2-kube-api-access-rlcbq\") pod \"3a8b0429-c553-4909-9ef5-39819f2f44f2\" (UID: \"3a8b0429-c553-4909-9ef5-39819f2f44f2\") " Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.354390 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8b0429-c553-4909-9ef5-39819f2f44f2-kube-api-access-rlcbq" (OuterVolumeSpecName: "kube-api-access-rlcbq") pod "3a8b0429-c553-4909-9ef5-39819f2f44f2" (UID: "3a8b0429-c553-4909-9ef5-39819f2f44f2"). InnerVolumeSpecName "kube-api-access-rlcbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.448771 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlcbq\" (UniqueName: \"kubernetes.io/projected/3a8b0429-c553-4909-9ef5-39819f2f44f2-kube-api-access-rlcbq\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.732550 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a8b0429-c553-4909-9ef5-39819f2f44f2" containerID="43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc" exitCode=0 Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.732600 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qkvgq" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.732686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qkvgq" event={"ID":"3a8b0429-c553-4909-9ef5-39819f2f44f2","Type":"ContainerDied","Data":"43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc"} Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.732739 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qkvgq" event={"ID":"3a8b0429-c553-4909-9ef5-39819f2f44f2","Type":"ContainerDied","Data":"5284d2e9edf8a0f0f36a60c9528106837818152c0413c86e67f3f75e75173451"} Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.732764 4813 scope.go:117] "RemoveContainer" containerID="43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.744452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerStarted","Data":"9d21fbd2481ff889d0da4b412db2c168880438769920434b875168d9bc0ee066"} Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.744519 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-46r87" event={"ID":"19477a8e-e8a1-43b4-9272-3f1394a27c60","Type":"ContainerStarted","Data":"7a37ce69859bd1465f0adc4bd5490065c6c537474d7d220763afebbd74a55baa"} Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.757542 4813 scope.go:117] "RemoveContainer" containerID="43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc" Dec 02 10:31:06 crc kubenswrapper[4813]: E1202 10:31:06.758214 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc\": container with ID starting with 43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc not found: ID does not exist" containerID="43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.758255 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc"} err="failed to get container status \"43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc\": rpc error: code = NotFound desc = could not find container \"43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc\": container with ID starting with 43fbf8ddfc1a91a773663f5361a1ffc217d12cbbc6b3a5dd6084f3435ac127dc not found: ID does not exist" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.775856 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-46r87" podStartSLOduration=8.44201389 podStartE2EDuration="23.775833374s" podCreationTimestamp="2025-12-02 10:30:43 +0000 UTC" firstStartedPulling="2025-12-02 10:30:43.657163885 +0000 UTC m=+1367.852338187" lastFinishedPulling="2025-12-02 10:30:58.990983369 +0000 UTC m=+1383.186157671" observedRunningTime="2025-12-02 10:31:06.772467547 +0000 UTC m=+1390.967641869" watchObservedRunningTime="2025-12-02 10:31:06.775833374 +0000 UTC m=+1390.971007676" Dec 02 10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.790135 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qkvgq"] Dec 02 
10:31:06 crc kubenswrapper[4813]: I1202 10:31:06.794797 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qkvgq"] Dec 02 10:31:07 crc kubenswrapper[4813]: I1202 10:31:07.752220 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-46r87" Dec 02 10:31:08 crc kubenswrapper[4813]: I1202 10:31:08.074648 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8b0429-c553-4909-9ef5-39819f2f44f2" path="/var/lib/kubelet/pods/3a8b0429-c553-4909-9ef5-39819f2f44f2/volumes" Dec 02 10:31:08 crc kubenswrapper[4813]: I1202 10:31:08.134476 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:31:08 crc kubenswrapper[4813]: I1202 10:31:08.135424 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:31:08 crc kubenswrapper[4813]: I1202 10:31:08.189850 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:31:08 crc kubenswrapper[4813]: I1202 10:31:08.466585 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-46r87" Dec 02 10:31:08 crc kubenswrapper[4813]: I1202 10:31:08.517580 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-46r87" Dec 02 10:31:09 crc kubenswrapper[4813]: I1202 10:31:09.805058 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:31:10 crc kubenswrapper[4813]: I1202 10:31:10.930860 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pgwk5"] Dec 02 10:31:11 crc kubenswrapper[4813]: I1202 10:31:11.756204 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:31:11 crc kubenswrapper[4813]: I1202 10:31:11.756273 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:31:11 crc kubenswrapper[4813]: I1202 10:31:11.772378 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pgwk5" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="registry-server" containerID="cri-o://ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b" gracePeriod=2 Dec 02 10:31:11 crc kubenswrapper[4813]: I1202 10:31:11.812461 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:31:11 crc kubenswrapper[4813]: I1202 10:31:11.848679 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:31:12 crc kubenswrapper[4813]: I1202 10:31:12.666434 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:12 crc kubenswrapper[4813]: I1202 10:31:12.666784 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:12 crc kubenswrapper[4813]: I1202 10:31:12.705468 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:12 crc kubenswrapper[4813]: I1202 10:31:12.808632 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6v8v4" Dec 02 10:31:13 crc kubenswrapper[4813]: I1202 10:31:13.484245 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kvvcz" Dec 02 10:31:15 crc kubenswrapper[4813]: I1202 10:31:15.730571 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhcm6"] Dec 02 10:31:15 crc kubenswrapper[4813]: I1202 10:31:15.732768 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hhcm6" podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerName="registry-server" containerID="cri-o://6d3aff5fd50d6e4e27477eb3f4d77295228cca552f5eba7aca9be2505a709ce7" gracePeriod=2 Dec 02 10:31:15 crc kubenswrapper[4813]: I1202 10:31:15.801218 4813 generic.go:334] "Generic (PLEG): container finished" podID="68124d51-9609-456c-a38e-b7107ccf22eb" containerID="ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b" exitCode=0 Dec 02 10:31:15 crc kubenswrapper[4813]: I1202 10:31:15.801272 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgwk5" event={"ID":"68124d51-9609-456c-a38e-b7107ccf22eb","Type":"ContainerDied","Data":"ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b"} Dec 02 10:31:18 crc kubenswrapper[4813]: E1202 10:31:18.134308 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b is running failed: container process not found" containerID="ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 10:31:18 crc kubenswrapper[4813]: E1202 10:31:18.135013 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b is running failed: container process not found" containerID="ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 10:31:18 crc kubenswrapper[4813]: E1202 10:31:18.135410 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b is running failed: container process not found" containerID="ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 10:31:18 crc kubenswrapper[4813]: E1202 10:31:18.135461 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-pgwk5" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="registry-server" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.623969 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.814539 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c972b\" (UniqueName: \"kubernetes.io/projected/68124d51-9609-456c-a38e-b7107ccf22eb-kube-api-access-c972b\") pod \"68124d51-9609-456c-a38e-b7107ccf22eb\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.815362 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-utilities\") pod \"68124d51-9609-456c-a38e-b7107ccf22eb\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.815471 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-catalog-content\") pod \"68124d51-9609-456c-a38e-b7107ccf22eb\" (UID: \"68124d51-9609-456c-a38e-b7107ccf22eb\") " Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.816164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-utilities" (OuterVolumeSpecName: "utilities") pod "68124d51-9609-456c-a38e-b7107ccf22eb" (UID: "68124d51-9609-456c-a38e-b7107ccf22eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.836415 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68124d51-9609-456c-a38e-b7107ccf22eb-kube-api-access-c972b" (OuterVolumeSpecName: "kube-api-access-c972b") pod "68124d51-9609-456c-a38e-b7107ccf22eb" (UID: "68124d51-9609-456c-a38e-b7107ccf22eb"). InnerVolumeSpecName "kube-api-access-c972b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.856153 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgwk5" event={"ID":"68124d51-9609-456c-a38e-b7107ccf22eb","Type":"ContainerDied","Data":"e0e9572af4f33f34e38460020b25081b2da9cc1369f6e42596bb232bca02d9c1"} Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.856227 4813 scope.go:117] "RemoveContainer" containerID="ba84559bb0280dadacfa848d24bb3e7e93b91f587962084a890a72673331872b" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.856408 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pgwk5" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.859776 4813 generic.go:334] "Generic (PLEG): container finished" podID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerID="6d3aff5fd50d6e4e27477eb3f4d77295228cca552f5eba7aca9be2505a709ce7" exitCode=0 Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.859836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhcm6" event={"ID":"d825a460-a5da-4a58-a5f9-fb938ccac9d8","Type":"ContainerDied","Data":"6d3aff5fd50d6e4e27477eb3f4d77295228cca552f5eba7aca9be2505a709ce7"} Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.860586 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68124d51-9609-456c-a38e-b7107ccf22eb" (UID: "68124d51-9609-456c-a38e-b7107ccf22eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.872777 4813 scope.go:117] "RemoveContainer" containerID="6c7ceac948532eb8e787baceceec58cf88893a8e17ffa361cd847fab320a73ef" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.886430 4813 scope.go:117] "RemoveContainer" containerID="ca82769fbfb2b30c5914adc77fe614ca5be4c2d1e02efbcf9b640bfa54836b05" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.916923 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.916977 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68124d51-9609-456c-a38e-b7107ccf22eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:18 crc kubenswrapper[4813]: I1202 10:31:18.917002 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c972b\" (UniqueName: \"kubernetes.io/projected/68124d51-9609-456c-a38e-b7107ccf22eb-kube-api-access-c972b\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.075044 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.186309 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pgwk5"] Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.190003 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pgwk5"] Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.220722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slgv4\" (UniqueName: \"kubernetes.io/projected/d825a460-a5da-4a58-a5f9-fb938ccac9d8-kube-api-access-slgv4\") pod \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.220883 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-catalog-content\") pod \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.220946 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-utilities\") pod \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\" (UID: \"d825a460-a5da-4a58-a5f9-fb938ccac9d8\") " Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.222300 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-utilities" (OuterVolumeSpecName: "utilities") pod "d825a460-a5da-4a58-a5f9-fb938ccac9d8" (UID: "d825a460-a5da-4a58-a5f9-fb938ccac9d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.223506 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d825a460-a5da-4a58-a5f9-fb938ccac9d8-kube-api-access-slgv4" (OuterVolumeSpecName: "kube-api-access-slgv4") pod "d825a460-a5da-4a58-a5f9-fb938ccac9d8" (UID: "d825a460-a5da-4a58-a5f9-fb938ccac9d8"). InnerVolumeSpecName "kube-api-access-slgv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.322871 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.322906 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slgv4\" (UniqueName: \"kubernetes.io/projected/d825a460-a5da-4a58-a5f9-fb938ccac9d8-kube-api-access-slgv4\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.325560 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d825a460-a5da-4a58-a5f9-fb938ccac9d8" (UID: "d825a460-a5da-4a58-a5f9-fb938ccac9d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.423817 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d825a460-a5da-4a58-a5f9-fb938ccac9d8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.873613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhcm6" event={"ID":"d825a460-a5da-4a58-a5f9-fb938ccac9d8","Type":"ContainerDied","Data":"fd625c938a45ea701f7949a49f1fae4dfd60c94fd23b771611d769ef4ca7656c"} Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.873666 4813 scope.go:117] "RemoveContainer" containerID="6d3aff5fd50d6e4e27477eb3f4d77295228cca552f5eba7aca9be2505a709ce7" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.873709 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhcm6" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.892502 4813 scope.go:117] "RemoveContainer" containerID="d1ea8a8bef8bba0b9ea45a7de974bc0e16261d8b80995ea19ec255d87ba2b87f" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.922567 4813 scope.go:117] "RemoveContainer" containerID="070bc5fe1dbe6c5874254a52638a628bfddc8b593fcf11109db3c61cba391006" Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.927262 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhcm6"] Dec 02 10:31:19 crc kubenswrapper[4813]: I1202 10:31:19.932445 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hhcm6"] Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.080076 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" path="/var/lib/kubelet/pods/68124d51-9609-456c-a38e-b7107ccf22eb/volumes" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.081285 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" path="/var/lib/kubelet/pods/d825a460-a5da-4a58-a5f9-fb938ccac9d8/volumes" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771303 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n"] Dec 02 10:31:20 crc kubenswrapper[4813]: E1202 10:31:20.771609 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="extract-utilities" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771625 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="extract-utilities" Dec 02 10:31:20 crc kubenswrapper[4813]: E1202 10:31:20.771638 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b0429-c553-4909-9ef5-39819f2f44f2" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771646 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b0429-c553-4909-9ef5-39819f2f44f2" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: E1202 10:31:20.771658 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerName="extract-utilities" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771667 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerName="extract-utilities" Dec 02 10:31:20 crc kubenswrapper[4813]: E1202 10:31:20.771682 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerName="extract-content" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771689 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerName="extract-content" Dec 02 10:31:20 crc kubenswrapper[4813]: E1202 10:31:20.771706 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="extract-content" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771714 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="extract-content" Dec 02 10:31:20 crc kubenswrapper[4813]: E1202 10:31:20.771727 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771734 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: E1202 10:31:20.771747 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771754 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771879 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d825a460-a5da-4a58-a5f9-fb938ccac9d8" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771896 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="68124d51-9609-456c-a38e-b7107ccf22eb" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.771905 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8b0429-c553-4909-9ef5-39819f2f44f2" containerName="registry-server" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.772906 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.777450 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-48hz7" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.787771 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n"] Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.944878 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-bundle\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.944935 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpg9\" (UniqueName: \"kubernetes.io/projected/2b32306d-7ca9-4dff-8119-7502681bc325-kube-api-access-2kpg9\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:20 crc kubenswrapper[4813]: I1202 10:31:20.944987 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-util\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.045979 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kpg9\" (UniqueName: \"kubernetes.io/projected/2b32306d-7ca9-4dff-8119-7502681bc325-kube-api-access-2kpg9\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.046032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-util\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.046121 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-bundle\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.046661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-bundle\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.047317 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-util\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.075556 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kpg9\" (UniqueName: \"kubernetes.io/projected/2b32306d-7ca9-4dff-8119-7502681bc325-kube-api-access-2kpg9\") pod \"a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.092050 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.497835 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n"] Dec 02 10:31:21 crc kubenswrapper[4813]: I1202 10:31:21.886820 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" event={"ID":"2b32306d-7ca9-4dff-8119-7502681bc325","Type":"ContainerStarted","Data":"bb5d41901aa041f566aac9a798960ace3c7d5efd7434338be3b23a6de3f31a81"} Dec 02 10:31:22 crc kubenswrapper[4813]: I1202 10:31:22.895745 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b32306d-7ca9-4dff-8119-7502681bc325" containerID="0f5da9920ea076dfa7d7a82370f4b22ea9dd0d098c9d85a89a256ecea48d7e79" exitCode=0 Dec 02 10:31:22 crc kubenswrapper[4813]: I1202 10:31:22.895833 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" event={"ID":"2b32306d-7ca9-4dff-8119-7502681bc325","Type":"ContainerDied","Data":"0f5da9920ea076dfa7d7a82370f4b22ea9dd0d098c9d85a89a256ecea48d7e79"} Dec 02 10:31:23 crc kubenswrapper[4813]: I1202 10:31:23.468403 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-46r87" Dec 02 10:31:24 crc kubenswrapper[4813]: I1202 10:31:24.911715 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b32306d-7ca9-4dff-8119-7502681bc325" containerID="029e09b1429193ff0f661c368bdf48edd867ef432b71be6fd054713520469376" exitCode=0 Dec 02 10:31:24 crc kubenswrapper[4813]: I1202 10:31:24.911845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" event={"ID":"2b32306d-7ca9-4dff-8119-7502681bc325","Type":"ContainerDied","Data":"029e09b1429193ff0f661c368bdf48edd867ef432b71be6fd054713520469376"} Dec 02 10:31:25 crc kubenswrapper[4813]: I1202 10:31:25.921063 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="2b32306d-7ca9-4dff-8119-7502681bc325" containerID="0c9cfaf8472d05c48f41dc914e1fb48477d58abd899eb319a6d6d542698bedad" exitCode=0 Dec 02 10:31:25 crc kubenswrapper[4813]: I1202 10:31:25.921205 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" event={"ID":"2b32306d-7ca9-4dff-8119-7502681bc325","Type":"ContainerDied","Data":"0c9cfaf8472d05c48f41dc914e1fb48477d58abd899eb319a6d6d542698bedad"} Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.180793 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.342154 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-util\") pod \"2b32306d-7ca9-4dff-8119-7502681bc325\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.342275 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-bundle\") pod \"2b32306d-7ca9-4dff-8119-7502681bc325\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.342355 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kpg9\" (UniqueName: \"kubernetes.io/projected/2b32306d-7ca9-4dff-8119-7502681bc325-kube-api-access-2kpg9\") pod \"2b32306d-7ca9-4dff-8119-7502681bc325\" (UID: \"2b32306d-7ca9-4dff-8119-7502681bc325\") " Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.343372 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-bundle" (OuterVolumeSpecName: "bundle") pod "2b32306d-7ca9-4dff-8119-7502681bc325" (UID: "2b32306d-7ca9-4dff-8119-7502681bc325"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.348066 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b32306d-7ca9-4dff-8119-7502681bc325-kube-api-access-2kpg9" (OuterVolumeSpecName: "kube-api-access-2kpg9") pod "2b32306d-7ca9-4dff-8119-7502681bc325" (UID: "2b32306d-7ca9-4dff-8119-7502681bc325"). InnerVolumeSpecName "kube-api-access-2kpg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.356024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-util" (OuterVolumeSpecName: "util") pod "2b32306d-7ca9-4dff-8119-7502681bc325" (UID: "2b32306d-7ca9-4dff-8119-7502681bc325"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.443557 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kpg9\" (UniqueName: \"kubernetes.io/projected/2b32306d-7ca9-4dff-8119-7502681bc325-kube-api-access-2kpg9\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.443594 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-util\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.443605 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b32306d-7ca9-4dff-8119-7502681bc325-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.935659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" event={"ID":"2b32306d-7ca9-4dff-8119-7502681bc325","Type":"ContainerDied","Data":"bb5d41901aa041f566aac9a798960ace3c7d5efd7434338be3b23a6de3f31a81"} Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.935730 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb5d41901aa041f566aac9a798960ace3c7d5efd7434338be3b23a6de3f31a81" Dec 02 10:31:27 crc kubenswrapper[4813]: I1202 10:31:27.935866 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.486727 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd"] Dec 02 10:31:30 crc kubenswrapper[4813]: E1202 10:31:30.487247 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b32306d-7ca9-4dff-8119-7502681bc325" containerName="pull" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.487258 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b32306d-7ca9-4dff-8119-7502681bc325" containerName="pull" Dec 02 10:31:30 crc kubenswrapper[4813]: E1202 10:31:30.487271 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b32306d-7ca9-4dff-8119-7502681bc325" containerName="extract" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.487279 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b32306d-7ca9-4dff-8119-7502681bc325" containerName="extract" Dec 02 10:31:30 crc kubenswrapper[4813]: E1202 10:31:30.487296 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b32306d-7ca9-4dff-8119-7502681bc325" containerName="util" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.487304 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b32306d-7ca9-4dff-8119-7502681bc325" containerName="util" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.487398 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b32306d-7ca9-4dff-8119-7502681bc325" containerName="extract" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.487788 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.490740 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-9mbs9" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.525640 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd"] Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.584205 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdsb2\" (UniqueName: \"kubernetes.io/projected/f6bc9e20-756e-43de-b1f7-e3ffbbcbd219-kube-api-access-rdsb2\") pod \"openstack-operator-controller-operator-77f7c7f9d7-cjmjd\" (UID: \"f6bc9e20-756e-43de-b1f7-e3ffbbcbd219\") " pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.685222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdsb2\" (UniqueName: \"kubernetes.io/projected/f6bc9e20-756e-43de-b1f7-e3ffbbcbd219-kube-api-access-rdsb2\") pod \"openstack-operator-controller-operator-77f7c7f9d7-cjmjd\" (UID: \"f6bc9e20-756e-43de-b1f7-e3ffbbcbd219\") " pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.715547 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdsb2\" (UniqueName: \"kubernetes.io/projected/f6bc9e20-756e-43de-b1f7-e3ffbbcbd219-kube-api-access-rdsb2\") pod \"openstack-operator-controller-operator-77f7c7f9d7-cjmjd\" (UID: \"f6bc9e20-756e-43de-b1f7-e3ffbbcbd219\") " pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" Dec 02 10:31:30 crc kubenswrapper[4813]: I1202 10:31:30.805174 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" Dec 02 10:31:31 crc kubenswrapper[4813]: I1202 10:31:31.002166 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd"] Dec 02 10:31:31 crc kubenswrapper[4813]: W1202 10:31:31.009768 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6bc9e20_756e_43de_b1f7_e3ffbbcbd219.slice/crio-705e6fe501aba7d01178238d5bd109b64a85c14f489a2882f83c74fde2cda08b WatchSource:0}: Error finding container 705e6fe501aba7d01178238d5bd109b64a85c14f489a2882f83c74fde2cda08b: Status 404 returned error can't find the container with id 705e6fe501aba7d01178238d5bd109b64a85c14f489a2882f83c74fde2cda08b Dec 02 10:31:31 crc kubenswrapper[4813]: I1202 10:31:31.971560 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" event={"ID":"f6bc9e20-756e-43de-b1f7-e3ffbbcbd219","Type":"ContainerStarted","Data":"705e6fe501aba7d01178238d5bd109b64a85c14f489a2882f83c74fde2cda08b"} Dec 02 10:31:36 crc kubenswrapper[4813]: I1202 10:31:36.997958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" event={"ID":"f6bc9e20-756e-43de-b1f7-e3ffbbcbd219","Type":"ContainerStarted","Data":"edeb777575cbad295853d9883be06be737d7771e99e343d31840ef0ad8fbdfdc"} Dec 02 10:31:36 crc kubenswrapper[4813]: I1202 10:31:36.998718 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" Dec 02 10:31:37 crc kubenswrapper[4813]: I1202 10:31:37.028695 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" podStartSLOduration=2.172070313 podStartE2EDuration="7.028676073s" podCreationTimestamp="2025-12-02 10:31:30 +0000 UTC" firstStartedPulling="2025-12-02 10:31:31.012169447 +0000 UTC m=+1415.207343749" lastFinishedPulling="2025-12-02 10:31:35.868775207 +0000 UTC m=+1420.063949509" observedRunningTime="2025-12-02 10:31:37.026384407 +0000 UTC m=+1421.221558699" watchObservedRunningTime="2025-12-02 10:31:37.028676073 +0000 UTC m=+1421.223850395" Dec 02 10:31:50 crc kubenswrapper[4813]: I1202 10:31:50.807960 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-77f7c7f9d7-cjmjd" Dec 02 10:32:04 crc kubenswrapper[4813]: I1202 10:32:04.274119 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:32:04 crc kubenswrapper[4813]: I1202 10:32:04.274712 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.880589 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z"] Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.882307 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.885261 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vhxs6" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.900725 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72"] Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.901950 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.904018 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qsc69" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.914728 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z"] Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.921921 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72"] Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.947977 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc"] Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.948946 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.954627 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-q2swf" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.979280 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l"] Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.982029 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.983449 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbd28\" (UniqueName: \"kubernetes.io/projected/9de86006-d480-4e91-904d-dea58373d496-kube-api-access-hbd28\") pod \"designate-operator-controller-manager-78b4bc895b-ptlqc\" (UID: \"9de86006-d480-4e91-904d-dea58373d496\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.983510 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dxz\" (UniqueName: \"kubernetes.io/projected/a9b5d3a4-c74a-4dc7-95e7-ce34faf34401-kube-api-access-s4dxz\") pod \"barbican-operator-controller-manager-7d9dfd778-vh67z\" (UID: \"a9b5d3a4-c74a-4dc7-95e7-ce34faf34401\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.983554 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sfc\" (UniqueName: \"kubernetes.io/projected/c78e6c08-10b5-442c-bcc4-96e55238f240-kube-api-access-z5sfc\") pod \"cinder-operator-controller-manager-78c47498c4-pwr72\" (UID: \"c78e6c08-10b5-442c-bcc4-96e55238f240\") " pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" Dec 02 10:32:07 crc kubenswrapper[4813]: I1202 10:32:07.985888 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wmrgz" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.005151 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.033578 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.035940 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.041465 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-b74k8" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.041568 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.085774 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.086471 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dxz\" (UniqueName: \"kubernetes.io/projected/a9b5d3a4-c74a-4dc7-95e7-ce34faf34401-kube-api-access-s4dxz\") pod \"barbican-operator-controller-manager-7d9dfd778-vh67z\" (UID: \"a9b5d3a4-c74a-4dc7-95e7-ce34faf34401\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.086591 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sfc\" (UniqueName: \"kubernetes.io/projected/c78e6c08-10b5-442c-bcc4-96e55238f240-kube-api-access-z5sfc\") pod \"cinder-operator-controller-manager-78c47498c4-pwr72\" (UID: \"c78e6c08-10b5-442c-bcc4-96e55238f240\") " pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.086633 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74ql\" (UniqueName: \"kubernetes.io/projected/7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec-kube-api-access-w74ql\") pod \"glance-operator-controller-manager-77987cd8cd-sxk5l\" (UID: \"7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.086764 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbd28\" (UniqueName: \"kubernetes.io/projected/9de86006-d480-4e91-904d-dea58373d496-kube-api-access-hbd28\") pod \"designate-operator-controller-manager-78b4bc895b-ptlqc\" (UID: \"9de86006-d480-4e91-904d-dea58373d496\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.086822 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmp6q\" (UniqueName: \"kubernetes.io/projected/98f2dfc1-669a-430c-a089-859de7ca1688-kube-api-access-vmp6q\") pod \"heat-operator-controller-manager-5f64f6f8bb-fd42j\" (UID: \"98f2dfc1-669a-430c-a089-859de7ca1688\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.146432 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dxz\" (UniqueName: \"kubernetes.io/projected/a9b5d3a4-c74a-4dc7-95e7-ce34faf34401-kube-api-access-s4dxz\") pod \"barbican-operator-controller-manager-7d9dfd778-vh67z\" (UID: \"a9b5d3a4-c74a-4dc7-95e7-ce34faf34401\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.159689 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sfc\" (UniqueName: \"kubernetes.io/projected/c78e6c08-10b5-442c-bcc4-96e55238f240-kube-api-access-z5sfc\") pod \"cinder-operator-controller-manager-78c47498c4-pwr72\" (UID: \"c78e6c08-10b5-442c-bcc4-96e55238f240\") " pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.161690 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbd28\" (UniqueName: \"kubernetes.io/projected/9de86006-d480-4e91-904d-dea58373d496-kube-api-access-hbd28\") pod \"designate-operator-controller-manager-78b4bc895b-ptlqc\" (UID: \"9de86006-d480-4e91-904d-dea58373d496\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.176149 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.178981 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.216403 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.216474 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.216985 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w74ql\" (UniqueName: \"kubernetes.io/projected/7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec-kube-api-access-w74ql\") pod \"glance-operator-controller-manager-77987cd8cd-sxk5l\" (UID: \"7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.217061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pglm7\" (UniqueName: \"kubernetes.io/projected/2ba57cac-e437-4de6-a3fa-563d41cd0404-kube-api-access-pglm7\") pod \"horizon-operator-controller-manager-68c6d99b8f-525b9\" (UID: \"2ba57cac-e437-4de6-a3fa-563d41cd0404\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.217169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmp6q\" (UniqueName: \"kubernetes.io/projected/98f2dfc1-669a-430c-a089-859de7ca1688-kube-api-access-vmp6q\") pod \"heat-operator-controller-manager-5f64f6f8bb-fd42j\" (UID: \"98f2dfc1-669a-430c-a089-859de7ca1688\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.219307 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-kbgxw" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.228628 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.275419 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.279879 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmp6q\" (UniqueName: \"kubernetes.io/projected/98f2dfc1-669a-430c-a089-859de7ca1688-kube-api-access-vmp6q\") pod \"heat-operator-controller-manager-5f64f6f8bb-fd42j\" (UID: \"98f2dfc1-669a-430c-a089-859de7ca1688\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.286300 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.287290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.292908 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-2l9k6" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.299819 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w74ql\" (UniqueName: \"kubernetes.io/projected/7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec-kube-api-access-w74ql\") pod \"glance-operator-controller-manager-77987cd8cd-sxk5l\" (UID: \"7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.305932 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.307621 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.308728 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.322154 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.322853 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-q2zkw" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.324751 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qbmj\" (UniqueName: \"kubernetes.io/projected/4da17b88-c060-41ed-ab38-90dc8dd0383e-kube-api-access-9qbmj\") pod \"ironic-operator-controller-manager-6c548fd776-5n7pc\" (UID: \"4da17b88-c060-41ed-ab38-90dc8dd0383e\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.324823 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pglm7\" (UniqueName: \"kubernetes.io/projected/2ba57cac-e437-4de6-a3fa-563d41cd0404-kube-api-access-pglm7\") pod \"horizon-operator-controller-manager-68c6d99b8f-525b9\" (UID: \"2ba57cac-e437-4de6-a3fa-563d41cd0404\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.342242 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.343309 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.352870 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-77ls9" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.359706 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.363419 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.397282 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.398734 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.401756 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-l8rlw" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.406857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pglm7\" (UniqueName: \"kubernetes.io/projected/2ba57cac-e437-4de6-a3fa-563d41cd0404-kube-api-access-pglm7\") pod \"horizon-operator-controller-manager-68c6d99b8f-525b9\" (UID: \"2ba57cac-e437-4de6-a3fa-563d41cd0404\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.420232 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.426809 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.426890 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qbmj\" (UniqueName: \"kubernetes.io/projected/4da17b88-c060-41ed-ab38-90dc8dd0383e-kube-api-access-9qbmj\") pod \"ironic-operator-controller-manager-6c548fd776-5n7pc\" (UID: \"4da17b88-c060-41ed-ab38-90dc8dd0383e\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.426949 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgg5\" (UniqueName: \"kubernetes.io/projected/baa2abea-8891-4e33-b453-e34dc8e15df7-kube-api-access-dzgg5\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.426992 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vknml\" (UniqueName: \"kubernetes.io/projected/2f7373b2-cc78-4f73-9ed5-23d0c3144867-kube-api-access-vknml\") pod \"keystone-operator-controller-manager-7765d96ddf-dprfp\" (UID: \"2f7373b2-cc78-4f73-9ed5-23d0c3144867\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.457289 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.460924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qbmj\" (UniqueName: \"kubernetes.io/projected/4da17b88-c060-41ed-ab38-90dc8dd0383e-kube-api-access-9qbmj\") pod \"ironic-operator-controller-manager-6c548fd776-5n7pc\" (UID: \"4da17b88-c060-41ed-ab38-90dc8dd0383e\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.497817 4813 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.528751 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgg5\" (UniqueName: \"kubernetes.io/projected/baa2abea-8891-4e33-b453-e34dc8e15df7-kube-api-access-dzgg5\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.528821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vknml\" (UniqueName: \"kubernetes.io/projected/2f7373b2-cc78-4f73-9ed5-23d0c3144867-kube-api-access-vknml\") pod \"keystone-operator-controller-manager-7765d96ddf-dprfp\" (UID: \"2f7373b2-cc78-4f73-9ed5-23d0c3144867\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.528872 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2b9j\" (UniqueName: \"kubernetes.io/projected/95242ae1-57e8-436f-9971-66e273b0d75c-kube-api-access-f2b9j\") pod \"manila-operator-controller-manager-7c79b5df47-8sknj\" (UID: \"95242ae1-57e8-436f-9971-66e273b0d75c\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.528931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:08 crc kubenswrapper[4813]: E1202 10:32:08.529081 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:32:08 crc kubenswrapper[4813]: E1202 10:32:08.529144 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert podName:baa2abea-8891-4e33-b453-e34dc8e15df7 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:09.029123484 +0000 UTC m=+1453.224297786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert") pod "infra-operator-controller-manager-57548d458d-2sk2z" (UID: "baa2abea-8891-4e33-b453-e34dc8e15df7") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.535137 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.536481 4813 util.go:30] "No sandbox for pod can be found. 
Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.543339 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nd2b2"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.544545 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wr9k4" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.544762 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.570302 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.571554 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.571870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vknml\" (UniqueName: \"kubernetes.io/projected/2f7373b2-cc78-4f73-9ed5-23d0c3144867-kube-api-access-vknml\") pod \"keystone-operator-controller-manager-7765d96ddf-dprfp\" (UID: \"2f7373b2-cc78-4f73-9ed5-23d0c3144867\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.574431 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bxgx7" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.576367 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.581771 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgg5\" (UniqueName: \"kubernetes.io/projected/baa2abea-8891-4e33-b453-e34dc8e15df7-kube-api-access-dzgg5\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.581934 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd2b2"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.589485 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.596850 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.597818 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.605689 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lt9k9" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.627097 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.630172 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8p5x\" (UniqueName: \"kubernetes.io/projected/da18c237-cd3d-4116-9373-989eaf92e7cd-kube-api-access-t8p5x\") pod \"nova-operator-controller-manager-697bc559fc-grz2d\" (UID: \"da18c237-cd3d-4116-9373-989eaf92e7cd\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.630207 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-utilities\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.630254 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2b9j\" (UniqueName: \"kubernetes.io/projected/95242ae1-57e8-436f-9971-66e273b0d75c-kube-api-access-f2b9j\") pod \"manila-operator-controller-manager-7c79b5df47-8sknj\" (UID: \"95242ae1-57e8-436f-9971-66e273b0d75c\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.630283 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpfjk\" (UniqueName: \"kubernetes.io/projected/c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4-kube-api-access-bpfjk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-z2z7m\" (UID: \"c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.630300 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpcz4\" (UniqueName: \"kubernetes.io/projected/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-kube-api-access-vpcz4\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.630321 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fmjr\" (UniqueName: \"kubernetes.io/projected/796ef4ca-26ba-44f0-b23a-c4fd808c5981-kube-api-access-6fmjr\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-c5x5p\" (UID: \"796ef4ca-26ba-44f0-b23a-c4fd808c5981\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.630347 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-catalog-content\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2"
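
[Editor's note] Each volume walks through the reconciler in a fixed order: "operationExecutor.VerifyControllerAttachedVolume started" (reconciler_common.go:245), then "operationExecutor.MountVolume started" (reconciler_common.go:218), then "MountVolume.SetUp succeeded" (operation_generator.go:637). The emptyDir volumes (utilities, catalog-content) and the projected token volumes complete within milliseconds; only the secret-backed cert volumes stall, for the reason noted above. A throwaway sketch that replays a kubelet log and prints every volume whose last observed phase is not "SetUp succeeded"; the substrings it matches and the one-entry-per-line input format are assumptions based on this excerpt:

// volphase.go: throwaway sketch, not a kubelet tool. Reports the last
// observed reconciler phase of each volume in a kubelet log on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Captures the UniqueName, e.g. kubernetes.io/secret/<pod-uid>-cert,
	// whether or not the quotes around it are klog-escaped.
	uniq := regexp.MustCompile(`UniqueName: \\?"([^"\\]+)\\?"`)
	phases := []string{
		"VerifyControllerAttachedVolume started",
		"MountVolume started",
		"MountVolume.SetUp succeeded",
		"MountVolume.SetUp failed",
	}
	last := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		for _, p := range phases {
			if strings.Contains(line, p) {
				if m := uniq.FindStringSubmatch(line); m != nil {
					last[m[1]] = p
				}
			}
		}
	}
	for v, p := range last {
		if p != "MountVolume.SetUp succeeded" {
			fmt.Printf("%-90s %s\n", v, p) // volumes still not mounted
		}
	}
}

Run against this window of the log, the leftovers it prints are exactly the cert, webhook-certs, and metrics-certs secret volumes seen failing below.
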
\"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.636018 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.636158 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.638718 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-f2ttv" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.651934 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.654007 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.680348 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.681716 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.688511 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.688801 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jrh88" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.698107 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.706661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2b9j\" (UniqueName: \"kubernetes.io/projected/95242ae1-57e8-436f-9971-66e273b0d75c-kube-api-access-f2b9j\") pod \"manila-operator-controller-manager-7c79b5df47-8sknj\" (UID: \"95242ae1-57e8-436f-9971-66e273b0d75c\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.726694 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.727995 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731166 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-cxfbk" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8p5x\" (UniqueName: \"kubernetes.io/projected/da18c237-cd3d-4116-9373-989eaf92e7cd-kube-api-access-t8p5x\") pod \"nova-operator-controller-manager-697bc559fc-grz2d\" (UID: \"da18c237-cd3d-4116-9373-989eaf92e7cd\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-utilities\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731597 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kpvw\" (UniqueName: \"kubernetes.io/projected/8e626c15-e204-4729-8c0f-95b7b101ec43-kube-api-access-4kpvw\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731661 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731694 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpfjk\" (UniqueName: \"kubernetes.io/projected/c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4-kube-api-access-bpfjk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-z2z7m\" (UID: \"c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpcz4\" (UniqueName: \"kubernetes.io/projected/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-kube-api-access-vpcz4\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731744 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmjr\" (UniqueName: \"kubernetes.io/projected/796ef4ca-26ba-44f0-b23a-c4fd808c5981-kube-api-access-6fmjr\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-c5x5p\" (UID: \"796ef4ca-26ba-44f0-b23a-c4fd808c5981\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731780 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5lc8\" (UniqueName: \"kubernetes.io/projected/2b41c1b0-929f-4289-b50d-5567c79a26d8-kube-api-access-f5lc8\") pod \"octavia-operator-controller-manager-998648c74-xbnzh\" (UID: \"2b41c1b0-929f-4289-b50d-5567c79a26d8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.731812 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-catalog-content\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.732341 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-utilities\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.734713 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-catalog-content\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.740737 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.742601 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.748493 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.750000 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.753698 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b8xlh" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.754433 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.760937 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpfjk\" (UniqueName: \"kubernetes.io/projected/c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4-kube-api-access-bpfjk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-z2z7m\" (UID: \"c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.761486 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.767336 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8p5x\" (UniqueName: \"kubernetes.io/projected/da18c237-cd3d-4116-9373-989eaf92e7cd-kube-api-access-t8p5x\") pod \"nova-operator-controller-manager-697bc559fc-grz2d\" (UID: \"da18c237-cd3d-4116-9373-989eaf92e7cd\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.782229 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wxld7"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.786518 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fmjr\" (UniqueName: \"kubernetes.io/projected/796ef4ca-26ba-44f0-b23a-c4fd808c5981-kube-api-access-6fmjr\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-c5x5p\" (UID: \"796ef4ca-26ba-44f0-b23a-c4fd808c5981\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.796291 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.796607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpcz4\" (UniqueName: \"kubernetes.io/projected/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-kube-api-access-vpcz4\") pod \"redhat-marketplace-nd2b2\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.811926 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-892cx" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.821893 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.835886 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkq8k\" (UniqueName: \"kubernetes.io/projected/b2092fa1-ae34-44b4-b89f-d2c1407b911a-kube-api-access-pkq8k\") pod \"placement-operator-controller-manager-78f8948974-wxld7\" (UID: \"b2092fa1-ae34-44b4-b89f-d2c1407b911a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.836042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kpvw\" (UniqueName: \"kubernetes.io/projected/8e626c15-e204-4729-8c0f-95b7b101ec43-kube-api-access-4kpvw\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.836486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.836528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g889d\" (UniqueName: \"kubernetes.io/projected/a22bf838-4122-4704-b8a7-d590e3ba5b65-kube-api-access-g889d\") pod \"ovn-operator-controller-manager-b6456fdb6-wj22v\" (UID: \"a22bf838-4122-4704-b8a7-d590e3ba5b65\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.836582 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5lc8\" (UniqueName: \"kubernetes.io/projected/2b41c1b0-929f-4289-b50d-5567c79a26d8-kube-api-access-f5lc8\") pod \"octavia-operator-controller-manager-998648c74-xbnzh\" (UID: \"2b41c1b0-929f-4289-b50d-5567c79a26d8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.836629 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr27j\" (UniqueName: \"kubernetes.io/projected/7543bebd-caf8-49db-99ce-fed3b5ac812a-kube-api-access-cr27j\") pod 
\"swift-operator-controller-manager-5f8c65bbfc-sl4ml\" (UID: \"7543bebd-caf8-49db-99ce-fed3b5ac812a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" Dec 02 10:32:08 crc kubenswrapper[4813]: E1202 10:32:08.836857 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:32:08 crc kubenswrapper[4813]: E1202 10:32:08.836991 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert podName:8e626c15-e204-4729-8c0f-95b7b101ec43 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:09.336971347 +0000 UTC m=+1453.532145649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" (UID: "8e626c15-e204-4729-8c0f-95b7b101ec43") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.842382 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.844067 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.847168 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m22xw" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.848824 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.851040 4813 util.go:30] "No sandbox for pod can be found. 
Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.856197 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.856421 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4jnpt" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.857017 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kpvw\" (UniqueName: \"kubernetes.io/projected/8e626c15-e204-4729-8c0f-95b7b101ec43-kube-api-access-4kpvw\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.857846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5lc8\" (UniqueName: \"kubernetes.io/projected/2b41c1b0-929f-4289-b50d-5567c79a26d8-kube-api-access-f5lc8\") pod \"octavia-operator-controller-manager-998648c74-xbnzh\" (UID: \"2b41c1b0-929f-4289-b50d-5567c79a26d8\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.868933 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wxld7"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.872992 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.880437 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.881880 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.882759 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.888688 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-4w2sg" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.896979 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.900709 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.906743 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb"] Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.920012 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.938405 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkq8k\" (UniqueName: \"kubernetes.io/projected/b2092fa1-ae34-44b4-b89f-d2c1407b911a-kube-api-access-pkq8k\") pod \"placement-operator-controller-manager-78f8948974-wxld7\" (UID: \"b2092fa1-ae34-44b4-b89f-d2c1407b911a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.938466 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8pjn\" (UniqueName: \"kubernetes.io/projected/aff40ee1-2e46-4923-8138-09046b9568dd-kube-api-access-s8pjn\") pod \"telemetry-operator-controller-manager-76cc84c6bb-nhfvs\" (UID: \"aff40ee1-2e46-4923-8138-09046b9568dd\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.938512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct495\" (UniqueName: \"kubernetes.io/projected/f1a3ada5-a084-4500-8c1b-a9e6e3008786-kube-api-access-ct495\") pod \"watcher-operator-controller-manager-769dc69bc-bflrb\" (UID: \"f1a3ada5-a084-4500-8c1b-a9e6e3008786\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.938552 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g889d\" (UniqueName: \"kubernetes.io/projected/a22bf838-4122-4704-b8a7-d590e3ba5b65-kube-api-access-g889d\") pod \"ovn-operator-controller-manager-b6456fdb6-wj22v\" (UID: \"a22bf838-4122-4704-b8a7-d590e3ba5b65\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.938608 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwl5h\" (UniqueName: \"kubernetes.io/projected/3bed8e3c-64ca-47e0-80b2-ec2f40473db9-kube-api-access-dwl5h\") pod \"test-operator-controller-manager-5854674fcc-l6sr9\" (UID: \"3bed8e3c-64ca-47e0-80b2-ec2f40473db9\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.938627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr27j\" (UniqueName: \"kubernetes.io/projected/7543bebd-caf8-49db-99ce-fed3b5ac812a-kube-api-access-cr27j\") pod \"swift-operator-controller-manager-5f8c65bbfc-sl4ml\" (UID: \"7543bebd-caf8-49db-99ce-fed3b5ac812a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.959964 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.971391 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkq8k\" (UniqueName: \"kubernetes.io/projected/b2092fa1-ae34-44b4-b89f-d2c1407b911a-kube-api-access-pkq8k\") pod \"placement-operator-controller-manager-78f8948974-wxld7\" (UID: \"b2092fa1-ae34-44b4-b89f-d2c1407b911a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" Dec 02 10:32:08 crc kubenswrapper[4813]: I1202 10:32:08.992959 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g889d\" (UniqueName: \"kubernetes.io/projected/a22bf838-4122-4704-b8a7-d590e3ba5b65-kube-api-access-g889d\") pod \"ovn-operator-controller-manager-b6456fdb6-wj22v\" (UID: \"a22bf838-4122-4704-b8a7-d590e3ba5b65\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.000650 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr27j\" (UniqueName: \"kubernetes.io/projected/7543bebd-caf8-49db-99ce-fed3b5ac812a-kube-api-access-cr27j\") pod \"swift-operator-controller-manager-5f8c65bbfc-sl4ml\" (UID: \"7543bebd-caf8-49db-99ce-fed3b5ac812a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.001540 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.009023 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"] Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.012405 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.023514 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.023837 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zg6mg" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.023994 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.036684 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"] Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.039780 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxhw\" (UniqueName: \"kubernetes.io/projected/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-kube-api-access-lmxhw\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.040534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwl5h\" (UniqueName: \"kubernetes.io/projected/3bed8e3c-64ca-47e0-80b2-ec2f40473db9-kube-api-access-dwl5h\") pod \"test-operator-controller-manager-5854674fcc-l6sr9\" (UID: \"3bed8e3c-64ca-47e0-80b2-ec2f40473db9\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.040739 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.040821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.040978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8pjn\" (UniqueName: \"kubernetes.io/projected/aff40ee1-2e46-4923-8138-09046b9568dd-kube-api-access-s8pjn\") pod \"telemetry-operator-controller-manager-76cc84c6bb-nhfvs\" (UID: \"aff40ee1-2e46-4923-8138-09046b9568dd\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.042718 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " 
pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.042763 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.043326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct495\" (UniqueName: \"kubernetes.io/projected/f1a3ada5-a084-4500-8c1b-a9e6e3008786-kube-api-access-ct495\") pod \"watcher-operator-controller-manager-769dc69bc-bflrb\" (UID: \"f1a3ada5-a084-4500-8c1b-a9e6e3008786\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.043362 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert podName:baa2abea-8891-4e33-b453-e34dc8e15df7 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:10.043339888 +0000 UTC m=+1454.238514260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert") pod "infra-operator-controller-manager-57548d458d-2sk2z" (UID: "baa2abea-8891-4e33-b453-e34dc8e15df7") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.057203 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.070750 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwl5h\" (UniqueName: \"kubernetes.io/projected/3bed8e3c-64ca-47e0-80b2-ec2f40473db9-kube-api-access-dwl5h\") pod \"test-operator-controller-manager-5854674fcc-l6sr9\" (UID: \"3bed8e3c-64ca-47e0-80b2-ec2f40473db9\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.084016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8pjn\" (UniqueName: \"kubernetes.io/projected/aff40ee1-2e46-4923-8138-09046b9568dd-kube-api-access-s8pjn\") pod \"telemetry-operator-controller-manager-76cc84c6bb-nhfvs\" (UID: \"aff40ee1-2e46-4923-8138-09046b9568dd\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.087172 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct495\" (UniqueName: \"kubernetes.io/projected/f1a3ada5-a084-4500-8c1b-a9e6e3008786-kube-api-access-ct495\") pod \"watcher-operator-controller-manager-769dc69bc-bflrb\" (UID: \"f1a3ada5-a084-4500-8c1b-a9e6e3008786\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.087060 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp"] Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.088105 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.094773 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hz29g" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.098896 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.144207 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp"] Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.145131 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxhw\" (UniqueName: \"kubernetes.io/projected/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-kube-api-access-lmxhw\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.145173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.145220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dp9s\" (UniqueName: \"kubernetes.io/projected/afe1c5ed-adc9-4200-b1c0-8938e759daed-kube-api-access-9dp9s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lwjwp\" (UID: \"afe1c5ed-adc9-4200-b1c0-8938e759daed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.145272 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.145383 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.145429 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:09.645413157 +0000 UTC m=+1453.840587459 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "webhook-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.146190 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.146217 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:09.646208349 +0000 UTC m=+1453.841382651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "metrics-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.185357 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxhw\" (UniqueName: \"kubernetes.io/projected/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-kube-api-access-lmxhw\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.234256 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z"] Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.246276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dp9s\" (UniqueName: \"kubernetes.io/projected/afe1c5ed-adc9-4200-b1c0-8938e759daed-kube-api-access-9dp9s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lwjwp\" (UID: \"afe1c5ed-adc9-4200-b1c0-8938e759daed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.254200 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc"] Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.270648 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dp9s\" (UniqueName: \"kubernetes.io/projected/afe1c5ed-adc9-4200-b1c0-8938e759daed-kube-api-access-9dp9s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lwjwp\" (UID: \"afe1c5ed-adc9-4200-b1c0-8938e759daed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" Dec 02 10:32:09 crc kubenswrapper[4813]: W1202 10:32:09.273048 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b5d3a4_c74a_4dc7_95e7_ce34faf34401.slice/crio-9220d5635551fa404bc402384e67646052321898283bada27104480df09752b3 WatchSource:0}: Error finding container 9220d5635551fa404bc402384e67646052321898283bada27104480df09752b3: Status 404 returned error can't find the container with id 9220d5635551fa404bc402384e67646052321898283bada27104480df09752b3 Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.313125 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs"
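
[Editor's note] The W-level "Failed to process watch event ... can't find the container with id ..." entry above comes from cAdvisor's cgroup watcher (manager.go:1169) racing container creation: the cgroup appears before the runtime has finished registering the container, so the lookup briefly returns a 404. In this log the same container ID (9220d563...) shows up about a second later in a "SyncLoop (PLEG) ... ContainerStarted" event, which is why these warnings are generally harmless during a burst of pod starts. A sketch that cross-references the two message types; both patterns are assumptions taken from this excerpt:

// watch404.go: sketch, not an official tool. Collects container IDs from
// cAdvisor's "can't find the container with id <id>" warnings and checks
// whether the same IDs later appear in PLEG ContainerStarted events,
// which would confirm the warnings were only a startup race.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	notFound := regexp.MustCompile(`can't find the container with id ([0-9a-f]{64})`)
	started := regexp.MustCompile(`"Type":"ContainerStarted","Data":"([0-9a-f]{64})"`)
	missing := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		if m := notFound.FindStringSubmatch(line); m != nil {
			missing[m[1]] = true
		}
		if m := started.FindStringSubmatch(line); m != nil && missing[m[1]] {
			fmt.Println("resolved by ContainerStarted:", m[1][:12])
			delete(missing, m[1])
		}
	}
	for id := range missing {
		fmt.Println("never seen starting:", id[:12]) // worth a closer look
	}
}
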
Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.347933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.348274 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.348431 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert podName:8e626c15-e204-4729-8c0f-95b7b101ec43 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:10.348359002 +0000 UTC m=+1454.543533314 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" (UID: "8e626c15-e204-4729-8c0f-95b7b101ec43") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.473776 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.628299 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.654032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.654180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.654352 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.654422 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:10.654399084 +0000 UTC m=+1454.849573396 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "metrics-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.655059 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: E1202 10:32:09.655168 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:10.655146865 +0000 UTC m=+1454.850321207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "webhook-server-cert" not found Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.662990 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.861564 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j"] Dec 02 10:32:09 crc kubenswrapper[4813]: W1202 10:32:09.870437 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc78e6c08_10b5_442c_bcc4_96e55238f240.slice/crio-68e9e655cd8c25b8174378403386b0440d00d423e8f74edd6e790c0ae86106d5 WatchSource:0}: Error finding container 68e9e655cd8c25b8174378403386b0440d00d423e8f74edd6e790c0ae86106d5: Status 404 returned error can't find the container with id 68e9e655cd8c25b8174378403386b0440d00d423e8f74edd6e790c0ae86106d5 Dec 02 10:32:09 crc kubenswrapper[4813]: I1202 10:32:09.873306 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.058759 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.058980 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.059043 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert podName:baa2abea-8891-4e33-b453-e34dc8e15df7 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:12.059022108 +0000 UTC m=+1456.254196410 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert") pod "infra-operator-controller-manager-57548d458d-2sk2z" (UID: "baa2abea-8891-4e33-b453-e34dc8e15df7") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.293471 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" event={"ID":"c78e6c08-10b5-442c-bcc4-96e55238f240","Type":"ContainerStarted","Data":"68e9e655cd8c25b8174378403386b0440d00d423e8f74edd6e790c0ae86106d5"} Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.295689 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" event={"ID":"a9b5d3a4-c74a-4dc7-95e7-ce34faf34401","Type":"ContainerStarted","Data":"9220d5635551fa404bc402384e67646052321898283bada27104480df09752b3"} Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.297879 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" event={"ID":"9de86006-d480-4e91-904d-dea58373d496","Type":"ContainerStarted","Data":"5f271f8697c86608ccd705179ec48388c47302b595dc0f09ba7981b1bf00f59e"} Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.299591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" event={"ID":"98f2dfc1-669a-430c-a089-859de7ca1688","Type":"ContainerStarted","Data":"8bff49bffe1d3764d2bc3d1c72b8f4d87a527d79563d8885370f18750af76148"} Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.305547 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.318272 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.334949 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.347287 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.363398 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.363579 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.363635 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert podName:8e626c15-e204-4729-8c0f-95b7b101ec43 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:12.363617729 +0000 UTC m=+1456.558792031 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" (UID: "8e626c15-e204-4729-8c0f-95b7b101ec43") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.380239 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd2b2"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.397786 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.408299 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.415892 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.424222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m"] Dec 02 10:32:10 crc kubenswrapper[4813]: W1202 10:32:10.432411 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4aed6a6_6a6a_424a_bacb_4a5fb1b5ada4.slice/crio-2e11bd7b55b8f9b60e5687a640369c6dfa1d31b7dd23e96698d62b83cc22fd91 WatchSource:0}: Error finding container 2e11bd7b55b8f9b60e5687a640369c6dfa1d31b7dd23e96698d62b83cc22fd91: Status 404 returned error can't find the container with id 2e11bd7b55b8f9b60e5687a640369c6dfa1d31b7dd23e96698d62b83cc22fd91 Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.434485 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj"] Dec 02 10:32:10 crc kubenswrapper[4813]: W1202 10:32:10.439039 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95242ae1_57e8_436f_9971_66e273b0d75c.slice/crio-6aee0c043642a60c8e8a1f7f107ccff589c93382b1145b7eb9f9d72a8bcd5512 WatchSource:0}: Error finding container 6aee0c043642a60c8e8a1f7f107ccff589c93382b1145b7eb9f9d72a8bcd5512: Status 404 returned error can't find the container with id 6aee0c043642a60c8e8a1f7f107ccff589c93382b1145b7eb9f9d72a8bcd5512 Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.443823 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml"] Dec 02 10:32:10 crc kubenswrapper[4813]: W1202 10:32:10.445377 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod796ef4ca_26ba_44f0_b23a_c4fd808c5981.slice/crio-bbf2655aac21c7d15a783c353aa6ffc361a6d164ccf8f369c088a74e5d66abee WatchSource:0}: Error finding container bbf2655aac21c7d15a783c353aa6ffc361a6d164ccf8f369c088a74e5d66abee: Status 404 returned error can't find the container with id bbf2655aac21c7d15a783c353aa6ffc361a6d164ccf8f369c088a74e5d66abee Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.453391 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p"] Dec 02 10:32:10 crc kubenswrapper[4813]: W1202 10:32:10.457671 4813 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2092fa1_ae34_44b4_b89f_d2c1407b911a.slice/crio-a5d5db1048a2a2943c37dd417457fc932ec0588fbc152e74d449e4c978060f19 WatchSource:0}: Error finding container a5d5db1048a2a2943c37dd417457fc932ec0588fbc152e74d449e4c978060f19: Status 404 returned error can't find the container with id a5d5db1048a2a2943c37dd417457fc932ec0588fbc152e74d449e4c978060f19 Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.457832 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g889d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-wj22v_openstack-operators(a22bf838-4122-4704-b8a7-d590e3ba5b65): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.458695 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wxld7"] Dec 02 10:32:10 crc kubenswrapper[4813]: W1202 10:32:10.458839 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff40ee1_2e46_4923_8138_09046b9568dd.slice/crio-cea058ae2b113ba944fe85d40f688233be23f796e9249e75b04aa5ab8f9e0995 WatchSource:0}: Error finding 
container cea058ae2b113ba944fe85d40f688233be23f796e9249e75b04aa5ab8f9e0995: Status 404 returned error can't find the container with id cea058ae2b113ba944fe85d40f688233be23f796e9249e75b04aa5ab8f9e0995 Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.459601 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g889d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-wj22v_openstack-operators(a22bf838-4122-4704-b8a7-d590e3ba5b65): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.460800 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" podUID="a22bf838-4122-4704-b8a7-d590e3ba5b65" Dec 02 10:32:10 crc kubenswrapper[4813]: W1202 10:32:10.461743 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a9b96d0_9a4f_4e67_9b12_94e83e89f4ec.slice/crio-83e814423e1f8b48c82ccb0c2a82fe3c8ada6070197d55ab564f6a3492d06a07 WatchSource:0}: Error finding container 83e814423e1f8b48c82ccb0c2a82fe3c8ada6070197d55ab564f6a3492d06a07: Status 404 returned error can't find the container with id 83e814423e1f8b48c82ccb0c2a82fe3c8ada6070197d55ab564f6a3492d06a07 Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.463449 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs"] Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.463501 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8pjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-nhfvs_openstack-operators(aff40ee1-2e46-4923-8138-09046b9568dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.463591 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkq8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-wxld7_openstack-operators(b2092fa1-ae34-44b4-b89f-d2c1407b911a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.465247 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w74ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-sxk5l_openstack-operators(7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.468048 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8pjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-nhfvs_openstack-operators(aff40ee1-2e46-4923-8138-09046b9568dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.468189 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkq8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-wxld7_openstack-operators(b2092fa1-ae34-44b4-b89f-d2c1407b911a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.468244 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w74ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-sxk5l_openstack-operators(7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.469401 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" podUID="7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.469504 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" podUID="aff40ee1-2e46-4923-8138-09046b9568dd" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.469399 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" podUID="b2092fa1-ae34-44b4-b89f-d2c1407b911a" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.470592 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9dp9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lwjwp_openstack-operators(afe1c5ed-adc9-4200-b1c0-8938e759daed): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.471154 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb"] Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.473792 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" podUID="afe1c5ed-adc9-4200-b1c0-8938e759daed" Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.484795 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9"] Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.484955 4813 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp"] Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.488491 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ct495,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-bflrb_openstack-operators(f1a3ada5-a084-4500-8c1b-a9e6e3008786): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.491044 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ct495,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-bflrb_openstack-operators(f1a3ada5-a084-4500-8c1b-a9e6e3008786): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.492312 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" podUID="f1a3ada5-a084-4500-8c1b-a9e6e3008786" Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.667403 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:10 crc kubenswrapper[4813]: I1202 10:32:10.667970 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.667565 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.668309 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:12.668291403 +0000 UTC m=+1456.863465705 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "webhook-server-cert" not found Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.668222 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:32:10 crc kubenswrapper[4813]: E1202 10:32:10.668420 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:12.668386426 +0000 UTC m=+1456.863560728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "metrics-server-cert" not found Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.310235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" event={"ID":"aff40ee1-2e46-4923-8138-09046b9568dd","Type":"ContainerStarted","Data":"cea058ae2b113ba944fe85d40f688233be23f796e9249e75b04aa5ab8f9e0995"} Dec 02 10:32:11 crc kubenswrapper[4813]: E1202 10:32:11.319273 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" podUID="aff40ee1-2e46-4923-8138-09046b9568dd" Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.319573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" event={"ID":"da18c237-cd3d-4116-9373-989eaf92e7cd","Type":"ContainerStarted","Data":"eb06d860051b45bfb101e4f75a96626b9e6ee4e6c0ffbd17b47f3cedc3d86ace"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.339754 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" event={"ID":"a22bf838-4122-4704-b8a7-d590e3ba5b65","Type":"ContainerStarted","Data":"81a697c24c041a2670e65802dedda4574b939c4207298da0adfeba2709143b50"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.344883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" event={"ID":"95242ae1-57e8-436f-9971-66e273b0d75c","Type":"ContainerStarted","Data":"6aee0c043642a60c8e8a1f7f107ccff589c93382b1145b7eb9f9d72a8bcd5512"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.346402 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" event={"ID":"7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec","Type":"ContainerStarted","Data":"83e814423e1f8b48c82ccb0c2a82fe3c8ada6070197d55ab564f6a3492d06a07"} Dec 02 10:32:11 crc 
kubenswrapper[4813]: I1202 10:32:11.348775 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" event={"ID":"2ba57cac-e437-4de6-a3fa-563d41cd0404","Type":"ContainerStarted","Data":"82e241eb84618d7ac61bb597ab1e09c5e3a63a335e9e01bebab44a39a4967dbe"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.350776 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" event={"ID":"796ef4ca-26ba-44f0-b23a-c4fd808c5981","Type":"ContainerStarted","Data":"bbf2655aac21c7d15a783c353aa6ffc361a6d164ccf8f369c088a74e5d66abee"} Dec 02 10:32:11 crc kubenswrapper[4813]: E1202 10:32:11.351721 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" podUID="a22bf838-4122-4704-b8a7-d590e3ba5b65" Dec 02 10:32:11 crc kubenswrapper[4813]: E1202 10:32:11.352450 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" podUID="7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec" Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.352496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" event={"ID":"7543bebd-caf8-49db-99ce-fed3b5ac812a","Type":"ContainerStarted","Data":"d65bc0de051796bd23a9255d5c702ce33174efdb810002759d11bfee871c9e55"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.354769 4813 generic.go:334] "Generic (PLEG): container finished" podID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerID="0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158" exitCode=0 Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.355017 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd2b2" event={"ID":"a29b0d41-84ff-4b6a-9ee4-529e207c6a09","Type":"ContainerDied","Data":"0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.355056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd2b2" event={"ID":"a29b0d41-84ff-4b6a-9ee4-529e207c6a09","Type":"ContainerStarted","Data":"54eac5a2ef7775eece9ff87d97094d7f19f27de2a4ebfedc20a2e5b76f5c786f"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.357384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" event={"ID":"f1a3ada5-a084-4500-8c1b-a9e6e3008786","Type":"ContainerStarted","Data":"52519895f69e1fb38bba75750cfd9e58224c0edd0a0b8d9d98f5c7c97371b592"} Dec 
02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.359524 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" event={"ID":"4da17b88-c060-41ed-ab38-90dc8dd0383e","Type":"ContainerStarted","Data":"f29b526fa8b579d48d85d7fcd4a9e000d3cea4eb07b03f96009543397c72a795"} Dec 02 10:32:11 crc kubenswrapper[4813]: E1202 10:32:11.359520 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" podUID="f1a3ada5-a084-4500-8c1b-a9e6e3008786" Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.362613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" event={"ID":"b2092fa1-ae34-44b4-b89f-d2c1407b911a","Type":"ContainerStarted","Data":"a5d5db1048a2a2943c37dd417457fc932ec0588fbc152e74d449e4c978060f19"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.365634 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" event={"ID":"2b41c1b0-929f-4289-b50d-5567c79a26d8","Type":"ContainerStarted","Data":"ad7230c720f8a58ebea62f847ccae1fa91876cc8be2194c1569e60fbd7e2206d"} Dec 02 10:32:11 crc kubenswrapper[4813]: E1202 10:32:11.368483 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" podUID="b2092fa1-ae34-44b4-b89f-d2c1407b911a" Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.376289 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" event={"ID":"2f7373b2-cc78-4f73-9ed5-23d0c3144867","Type":"ContainerStarted","Data":"86902c23f9c31cd1c22f5d1e8f07fe15c04d76e757e27158c72746cd8d94bae6"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.387143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m" event={"ID":"c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4","Type":"ContainerStarted","Data":"2e11bd7b55b8f9b60e5687a640369c6dfa1d31b7dd23e96698d62b83cc22fd91"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.391859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9" event={"ID":"3bed8e3c-64ca-47e0-80b2-ec2f40473db9","Type":"ContainerStarted","Data":"1fc3bd0729838a9e4cb1c925e9a536ab944eca7b9a7107cc5e5a607d0f9d5162"} Dec 02 10:32:11 crc kubenswrapper[4813]: I1202 10:32:11.392814 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" event={"ID":"afe1c5ed-adc9-4200-b1c0-8938e759daed","Type":"ContainerStarted","Data":"72dda41897aa0ed5d02c9651902e54edfbac088c38c72101926e558733743e76"} Dec 02 10:32:11 crc kubenswrapper[4813]: E1202 10:32:11.394188 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" podUID="afe1c5ed-adc9-4200-b1c0-8938e759daed" Dec 02 10:32:12 crc kubenswrapper[4813]: I1202 10:32:12.093665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.093804 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.094155 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert podName:baa2abea-8891-4e33-b453-e34dc8e15df7 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:16.094137443 +0000 UTC m=+1460.289311745 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert") pod "infra-operator-controller-manager-57548d458d-2sk2z" (UID: "baa2abea-8891-4e33-b453-e34dc8e15df7") : secret "infra-operator-webhook-server-cert" not found Dec 02 10:32:12 crc kubenswrapper[4813]: I1202 10:32:12.399311 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.399510 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.399627 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert podName:8e626c15-e204-4729-8c0f-95b7b101ec43 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:16.399581499 +0000 UTC m=+1460.594755801 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" (UID: "8e626c15-e204-4729-8c0f-95b7b101ec43") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.408675 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" podUID="afe1c5ed-adc9-4200-b1c0-8938e759daed" Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.410006 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" podUID="7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec" Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.410480 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" podUID="aff40ee1-2e46-4923-8138-09046b9568dd" Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.411017 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" podUID="b2092fa1-ae34-44b4-b89f-d2c1407b911a" Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.411656 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" podUID="f1a3ada5-a084-4500-8c1b-a9e6e3008786" Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.411827 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" podUID="a22bf838-4122-4704-b8a7-d590e3ba5b65" Dec 02 10:32:12 crc kubenswrapper[4813]: I1202 10:32:12.703627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:12 crc kubenswrapper[4813]: I1202 10:32:12.703752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.703961 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.704013 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.704027 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:16.704008266 +0000 UTC m=+1460.899182568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "webhook-server-cert" not found Dec 02 10:32:12 crc kubenswrapper[4813]: E1202 10:32:12.704111 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:16.704092908 +0000 UTC m=+1460.899267210 (durationBeforeRetry 4s). 
Dec 02 10:32:16 crc kubenswrapper[4813]: I1202 10:32:16.166230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z"
Dec 02 10:32:16 crc kubenswrapper[4813]: E1202 10:32:16.166395 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 02 10:32:16 crc kubenswrapper[4813]: E1202 10:32:16.167037 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert podName:baa2abea-8891-4e33-b453-e34dc8e15df7 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:24.167015057 +0000 UTC m=+1468.362189359 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert") pod "infra-operator-controller-manager-57548d458d-2sk2z" (UID: "baa2abea-8891-4e33-b453-e34dc8e15df7") : secret "infra-operator-webhook-server-cert" not found
Dec 02 10:32:16 crc kubenswrapper[4813]: I1202 10:32:16.471168 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg"
Dec 02 10:32:16 crc kubenswrapper[4813]: E1202 10:32:16.471410 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 02 10:32:16 crc kubenswrapper[4813]: E1202 10:32:16.471457 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert podName:8e626c15-e204-4729-8c0f-95b7b101ec43 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:24.471442374 +0000 UTC m=+1468.666616676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" (UID: "8e626c15-e204-4729-8c0f-95b7b101ec43") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 02 10:32:16 crc kubenswrapper[4813]: I1202 10:32:16.775449 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:16 crc kubenswrapper[4813]: I1202 10:32:16.775573 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:16 crc kubenswrapper[4813]: E1202 10:32:16.775726 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 02 10:32:16 crc kubenswrapper[4813]: E1202 10:32:16.775780 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:24.775763749 +0000 UTC m=+1468.970938051 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "metrics-server-cert" not found
Dec 02 10:32:16 crc kubenswrapper[4813]: E1202 10:32:16.776178 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 02 10:32:16 crc kubenswrapper[4813]: E1202 10:32:16.776209 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:24.776201361 +0000 UTC m=+1468.971375663 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "webhook-server-cert" not found
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.198124 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z"
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.203668 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baa2abea-8891-4e33-b453-e34dc8e15df7-cert\") pod \"infra-operator-controller-manager-57548d458d-2sk2z\" (UID: \"baa2abea-8891-4e33-b453-e34dc8e15df7\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z"
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.316946 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z"
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.502225 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg"
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.506767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e626c15-e204-4729-8c0f-95b7b101ec43-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg\" (UID: \"8e626c15-e204-4729-8c0f-95b7b101ec43\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg"
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.593316 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg"
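[Editor's note] Note the retry cadence in the nestedpendingoperations entries: durationBeforeRetry doubles from 4s (10:32:12) to 8s (10:32:16) above, and to 16s in the entries that follow, i.e. per-volume exponential backoff. A minimal sketch of that pattern is below, assuming a simple doubling schedule with a cap; kubelet's actual bookkeeping lives in nestedpendingoperations.go and is more involved.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // mountWithBackoff retries op, doubling the wait after each failure,
    // mirroring the 4s/8s/16s durationBeforeRetry progression in the log.
    // Illustrative only; not kubelet's implementation.
    func mountWithBackoff(op func() error, initial, max time.Duration) error {
    	wait := initial
    	for {
    		if err := op(); err == nil {
    			return nil
    		}
    		if wait > max {
    			return errors.New("giving up")
    		}
    		fmt.Printf("mount failed; no retries permitted for %v\n", wait)
    		time.Sleep(wait)
    		wait *= 2 // exponential backoff
    	}
    }

    func main() {
    	tries := 0
    	_ = mountWithBackoff(func() error {
    		tries++
    		if tries < 4 { // fail until the Secret "appears", as at 10:32:24
    			return errors.New(`secret "webhook-server-cert" not found`)
    		}
    		return nil
    	}, 4*time.Second, 2*time.Minute)
    	fmt.Println("MountVolume.SetUp succeeded")
    }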
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.805377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.805502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:24 crc kubenswrapper[4813]: E1202 10:32:24.805596 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 02 10:32:24 crc kubenswrapper[4813]: E1202 10:32:24.805681 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs podName:80e020ca-18e4-47c4-aaa7-30eba6e9dfd8 nodeName:}" failed. No retries permitted until 2025-12-02 10:32:40.805663771 +0000 UTC m=+1485.000838103 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs") pod "openstack-operator-controller-manager-65b4bc588-254sd" (UID: "80e020ca-18e4-47c4-aaa7-30eba6e9dfd8") : secret "webhook-server-cert" not found
Dec 02 10:32:24 crc kubenswrapper[4813]: I1202 10:32:24.809579 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-metrics-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:29 crc kubenswrapper[4813]: E1202 10:32:29.933228 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670"
Dec 02 10:32:29 crc kubenswrapper[4813]: E1202 10:32:29.934351 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t8p5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-grz2d_openstack-operators(da18c237-cd3d-4116-9373-989eaf92e7cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 02 10:32:29 crc kubenswrapper[4813]: E1202 10:32:29.952870 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7"
Dec 02 10:32:29 crc kubenswrapper[4813]: E1202 10:32:29.953185 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vknml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-dprfp_openstack-operators(2f7373b2-cc78-4f73-9ed5-23d0c3144867): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 02 10:32:30 crc kubenswrapper[4813]: I1202 10:32:30.461399 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z"]
Dec 02 10:32:30 crc kubenswrapper[4813]: W1202 10:32:30.501339 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaa2abea_8891_4e33_b453_e34dc8e15df7.slice/crio-1632747e2de7e058bb9144a29021ff238815f347a1a9945634141821ed2dcfe9 WatchSource:0}: Error finding container 1632747e2de7e058bb9144a29021ff238815f347a1a9945634141821ed2dcfe9: Status 404 returned error can't find the container with id 1632747e2de7e058bb9144a29021ff238815f347a1a9945634141821ed2dcfe9
Dec 02 10:32:30 crc kubenswrapper[4813]: I1202 10:32:30.555892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" event={"ID":"4da17b88-c060-41ed-ab38-90dc8dd0383e","Type":"ContainerStarted","Data":"8d659515f2c665d0923cae1ca73cb1f75453bacabc7247b26185ed011cd603bd"}
Dec 02 10:32:30 crc kubenswrapper[4813]: I1202 10:32:30.573421 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg"]
Dec 02 10:32:30 crc kubenswrapper[4813]: I1202 10:32:30.575769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" event={"ID":"baa2abea-8891-4e33-b453-e34dc8e15df7","Type":"ContainerStarted","Data":"1632747e2de7e058bb9144a29021ff238815f347a1a9945634141821ed2dcfe9"}
Dec 02 10:32:30 crc kubenswrapper[4813]: W1202 10:32:30.612438 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e626c15_e204_4729_8c0f_95b7b101ec43.slice/crio-aff5c2e55d2db6aa7b9564dc6f2853c5ebc93cc7de5eba66a857eda8ef74f9b0 WatchSource:0}: Error finding container aff5c2e55d2db6aa7b9564dc6f2853c5ebc93cc7de5eba66a857eda8ef74f9b0: Status 404 returned error can't find the container with id aff5c2e55d2db6aa7b9564dc6f2853c5ebc93cc7de5eba66a857eda8ef74f9b0
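[Editor's note] The "pull QPS exceeded" errors in the next entries are not registry errors: they come from kubelet's own client-side throttle on image pulls (the registryPullQPS/registryBurst kubelet configuration fields, which default to 5 and 10). With this many operator pods scheduled at once the token bucket drains, the overflow pulls fail immediately, and the affected containers drop into ImagePullBackOff. A token-bucket sketch under those assumed defaults follows, using golang.org/x/time/rate; it is illustrative, not kubelet's actual code path.

    package main

    import (
    	"errors"
    	"fmt"

    	"golang.org/x/time/rate"
    )

    // A token bucket in the spirit of kubelet's registryPullQPS=5 /
    // registryBurst=10 defaults (assumed here; check your kubelet config).
    var pullLimiter = rate.NewLimiter(rate.Limit(5), 10)

    func pullImage(image string) error {
    	if !pullLimiter.Allow() {
    		return errors.New("pull QPS exceeded") // the error string seen in the log
    	}
    	fmt.Println("pulling", image)
    	return nil
    }

    func main() {
    	// 12 near-simultaneous pulls: roughly the first 10 consume the burst,
    	// the rest are rejected immediately.
    	for i := 0; i < 12; i++ {
    		if err := pullImage(fmt.Sprintf("quay.io/example/operator-%d", i)); err != nil {
    			fmt.Printf("pull %d: %v\n", i, err)
    		}
    	}
    }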
Dec 02 10:32:30 crc kubenswrapper[4813]: E1202 10:32:30.760003 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f5lc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-xbnzh_openstack-operators(2b41c1b0-929f-4289-b50d-5567c79a26d8): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 02 10:32:30 crc kubenswrapper[4813]: E1202 10:32:30.762123 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" podUID="2b41c1b0-929f-4289-b50d-5567c79a26d8"
Dec 02 10:32:30 crc kubenswrapper[4813]: E1202 10:32:30.775729 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pglm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-525b9_openstack-operators(2ba57cac-e437-4de6-a3fa-563d41cd0404): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 02 10:32:30 crc kubenswrapper[4813]: E1202 10:32:30.776941 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" podUID="2ba57cac-e437-4de6-a3fa-563d41cd0404"
Dec 02 10:32:30 crc kubenswrapper[4813]: E1202 10:32:30.827650 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cr27j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-sl4ml_openstack-operators(7543bebd-caf8-49db-99ce-fed3b5ac812a): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 02 10:32:30 crc kubenswrapper[4813]: E1202 10:32:30.828778 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" podUID="7543bebd-caf8-49db-99ce-fed3b5ac812a"
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.585116 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" event={"ID":"2b41c1b0-929f-4289-b50d-5567c79a26d8","Type":"ContainerStarted","Data":"5f75d57d6524379009c991e9aa437b6c3e3d6265722af2cd3f90c04ea3ab3c2c"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.592206 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh"
Dec 02 10:32:31 crc kubenswrapper[4813]: E1202 10:32:31.599437 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" podUID="2b41c1b0-929f-4289-b50d-5567c79a26d8"
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.624512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" event={"ID":"95242ae1-57e8-436f-9971-66e273b0d75c","Type":"ContainerStarted","Data":"080de0f3c19bfbe3590728b596a7fddcb4fd628018b439841840595d3f67d9f5"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.633696 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" event={"ID":"7543bebd-caf8-49db-99ce-fed3b5ac812a","Type":"ContainerStarted","Data":"4941961892b0a28beed56ec5bbb479b38c614644235005e2e2a7156020a37846"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.634381 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml"
Dec 02 10:32:31 crc kubenswrapper[4813]: E1202 10:32:31.643992 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" podUID="7543bebd-caf8-49db-99ce-fed3b5ac812a"
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.659541 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" event={"ID":"98f2dfc1-669a-430c-a089-859de7ca1688","Type":"ContainerStarted","Data":"369dc46e6ae88806376381c1271982c79c26125f0430952f94497fd848448978"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.673452 4813 generic.go:334] "Generic (PLEG): container finished" podID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerID="b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb" exitCode=0
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.673516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd2b2" event={"ID":"a29b0d41-84ff-4b6a-9ee4-529e207c6a09","Type":"ContainerDied","Data":"b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.676549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" event={"ID":"c78e6c08-10b5-442c-bcc4-96e55238f240","Type":"ContainerStarted","Data":"31d0ce9c658b24756b82e35951448fd69063563c2e1dcafa475c3c54bed2b893"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.684587 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" event={"ID":"a9b5d3a4-c74a-4dc7-95e7-ce34faf34401","Type":"ContainerStarted","Data":"7ad27f93ec06edb71f75160b43d3ba3e4f60efdf35d3c67d1f4de93562d98e70"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.713038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" event={"ID":"2ba57cac-e437-4de6-a3fa-563d41cd0404","Type":"ContainerStarted","Data":"c865c21b53d2cc3603f0957781d4cc3440b2d6bf248c02edc97ecad0507fb0bc"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.713927 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9"
Dec 02 10:32:31 crc kubenswrapper[4813]: E1202 10:32:31.716085 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" podUID="2ba57cac-e437-4de6-a3fa-563d41cd0404"
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.718522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" event={"ID":"8e626c15-e204-4729-8c0f-95b7b101ec43","Type":"ContainerStarted","Data":"aff5c2e55d2db6aa7b9564dc6f2853c5ebc93cc7de5eba66a857eda8ef74f9b0"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.727973 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" event={"ID":"9de86006-d480-4e91-904d-dea58373d496","Type":"ContainerStarted","Data":"a8f808e088ae8435e7d9042468383d09b26fe9ead090c0d32ea40a36623b0d04"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.752987 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" event={"ID":"796ef4ca-26ba-44f0-b23a-c4fd808c5981","Type":"ContainerStarted","Data":"bd68fc1e8d9885e381519c0ce203769cb0667ba4bd71690a211c5141d539c9d3"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.756110 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m" event={"ID":"c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4","Type":"ContainerStarted","Data":"928d2e0efa7537cda7b06388625c3caeaa2dd26cb3fdc751c6b708bdc916cf22"}
Dec 02 10:32:31 crc kubenswrapper[4813]: I1202 10:32:31.759870 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9" event={"ID":"3bed8e3c-64ca-47e0-80b2-ec2f40473db9","Type":"ContainerStarted","Data":"93ccd2256688e566da4b01eea384aa706bca518d731952dc7bd7eec6bda4bbe2"}
Dec 02 10:32:32 crc kubenswrapper[4813]: E1202 10:32:32.783348 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" podUID="7543bebd-caf8-49db-99ce-fed3b5ac812a"
Dec 02 10:32:32 crc kubenswrapper[4813]: E1202 10:32:32.783686 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" podUID="2ba57cac-e437-4de6-a3fa-563d41cd0404"
Dec 02 10:32:32 crc kubenswrapper[4813]: E1202 10:32:32.783726 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" podUID="2b41c1b0-929f-4289-b50d-5567c79a26d8"
Dec 02 10:32:34 crc kubenswrapper[4813]: I1202 10:32:34.274213 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:32:34 crc kubenswrapper[4813]: I1202 10:32:34.274678 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:32:38 crc kubenswrapper[4813]: I1202 10:32:38.579357 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9"
Dec 02 10:32:38 crc kubenswrapper[4813]: E1202 10:32:38.582341 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" podUID="2ba57cac-e437-4de6-a3fa-563d41cd0404"
Dec 02 10:32:38 crc kubenswrapper[4813]: I1202 10:32:38.963885 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh"
Dec 02 10:32:38 crc kubenswrapper[4813]: E1202 10:32:38.966249 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" podUID="2b41c1b0-929f-4289-b50d-5567c79a26d8"
Dec 02 10:32:39 crc kubenswrapper[4813]: I1202 10:32:39.059685 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml"
Dec 02 10:32:39 crc kubenswrapper[4813]: E1202 10:32:39.061511 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" podUID="7543bebd-caf8-49db-99ce-fed3b5ac812a"
Dec 02 10:32:40 crc kubenswrapper[4813]: I1202 10:32:40.861240 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:40 crc kubenswrapper[4813]: I1202 10:32:40.870239 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80e020ca-18e4-47c4-aaa7-30eba6e9dfd8-webhook-certs\") pod \"openstack-operator-controller-manager-65b4bc588-254sd\" (UID: \"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8\") " pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:41 crc kubenswrapper[4813]: I1202 10:32:41.146064 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:54 crc kubenswrapper[4813]: E1202 10:32:54.688097 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Dec 02 10:32:54 crc kubenswrapper[4813]: E1202 10:32:54.688757 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9dp9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lwjwp_openstack-operators(afe1c5ed-adc9-4200-b1c0-8938e759daed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 02 10:32:54 crc kubenswrapper[4813]: E1202 10:32:54.690032 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" podUID="afe1c5ed-adc9-4200-b1c0-8938e759daed"
Dec 02 10:32:55 crc kubenswrapper[4813]: I1202 10:32:55.071167 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"]
Dec 02 10:32:55 crc kubenswrapper[4813]: W1202 10:32:55.102256 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80e020ca_18e4_47c4_aaa7_30eba6e9dfd8.slice/crio-971a8984969e655c3b4195ba88d0d9809e72c3ed42e9b94ced33b0042154f80f WatchSource:0}: Error finding container 971a8984969e655c3b4195ba88d0d9809e72c3ed42e9b94ced33b0042154f80f: Status 404 returned error can't find the container with id 971a8984969e655c3b4195ba88d0d9809e72c3ed42e9b94ced33b0042154f80f
Dec 02 10:32:55 crc kubenswrapper[4813]: E1202 10:32:55.730266 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" podUID="da18c237-cd3d-4116-9373-989eaf92e7cd"
Dec 02 10:32:55 crc kubenswrapper[4813]: I1202 10:32:55.972870 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" event={"ID":"f1a3ada5-a084-4500-8c1b-a9e6e3008786","Type":"ContainerStarted","Data":"05b98e867fb265af22a19b1a3922f064bafbd8c12897011c3efe18745f56e82b"}
Dec 02 10:32:55 crc kubenswrapper[4813]: I1202 10:32:55.987284 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" event={"ID":"da18c237-cd3d-4116-9373-989eaf92e7cd","Type":"ContainerStarted","Data":"d5f2cac8f845b73b99e610f52b1b6b5c2fbcefaa62d7e55b8ca4303b8db2b72b"}
Dec 02 10:32:55 crc kubenswrapper[4813]: I1202 10:32:55.991373 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" event={"ID":"a22bf838-4122-4704-b8a7-d590e3ba5b65","Type":"ContainerStarted","Data":"c6891960f127ea9f64a424779a12ee4da1b7edabd0f7c80c2a641b829b1e7528"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.005654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" event={"ID":"b2092fa1-ae34-44b4-b89f-d2c1407b911a","Type":"ContainerStarted","Data":"b41c565e93f47f69848b94a2e56e077b87842dc36607e9f050bd680bbb61c493"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.025665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" event={"ID":"7543bebd-caf8-49db-99ce-fed3b5ac812a","Type":"ContainerStarted","Data":"df325f4f090d2e68a8ecc795907c3afd7288907ea8604b108cac288069509268"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.059517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" event={"ID":"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8","Type":"ContainerStarted","Data":"bc8211e7254f545606eec13240a11beecfa08826477625a1acf42941889a0fe9"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.059576 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" event={"ID":"80e020ca-18e4-47c4-aaa7-30eba6e9dfd8","Type":"ContainerStarted","Data":"971a8984969e655c3b4195ba88d0d9809e72c3ed42e9b94ced33b0042154f80f"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.059963 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd"
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.176730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" event={"ID":"aff40ee1-2e46-4923-8138-09046b9568dd","Type":"ContainerStarted","Data":"1cd593c73a76a91a066a804cb356ac8dc6c9bc0e09d1bf4deb0d51115d4bb477"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.220536 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" event={"ID":"8e626c15-e204-4729-8c0f-95b7b101ec43","Type":"ContainerStarted","Data":"8d41692c27cedd9e119a9020ccdba8e6e9be1889af06dbf9bbe35cd3296f2116"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.232545 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" podStartSLOduration=48.23250504 podStartE2EDuration="48.23250504s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:32:56.23178533 +0000 UTC m=+1500.426959652" watchObservedRunningTime="2025-12-02 10:32:56.23250504 +0000 UTC m=+1500.427679372"
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.248678 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" event={"ID":"baa2abea-8891-4e33-b453-e34dc8e15df7","Type":"ContainerStarted","Data":"e70548849055814ee538bdcb931b636db3e51bef64aea736a0a906bd3cf86869"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.273722 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl4ml" podStartSLOduration=28.732151561 podStartE2EDuration="48.273704324s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.452960712 +0000 UTC m=+1454.648135014" lastFinishedPulling="2025-12-02 10:32:29.994513475 +0000 UTC m=+1474.189687777" observedRunningTime="2025-12-02 10:32:56.2656892 +0000 UTC m=+1500.460863502" watchObservedRunningTime="2025-12-02 10:32:56.273704324 +0000 UTC m=+1500.468878626"
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.292469 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd2b2" event={"ID":"a29b0d41-84ff-4b6a-9ee4-529e207c6a09","Type":"ContainerStarted","Data":"744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.295194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" event={"ID":"c78e6c08-10b5-442c-bcc4-96e55238f240","Type":"ContainerStarted","Data":"fcf525e355ee0f932e76ce9649cba03750a7e2d15b5103e1ffc872faceeb94b9"}
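[Editor's note] The pod_startup_latency_tracker entries encode the pod-start SLO metric: podStartSLOduration is the end-to-end start duration minus the time spent pulling images. The swift-operator entry above checks out: pulling ran from 10:32:10.452960712 to 10:32:29.994513475, i.e. 19.541552763s, and 48.273704324s (E2E) - 19.541552763s = 28.732151561s, exactly the reported podStartSLOduration. The openstack-operator entry shows the zero-valued sentinel timestamps (0001-01-01) used when no pull was observed, in which case the SLO duration equals the E2E duration.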
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.296182 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72"
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.311266 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72"
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.321358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" event={"ID":"7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec","Type":"ContainerStarted","Data":"b6c24e7363edd71b65720830ad12668970f0d2d4dbb1d177f2b7c05f03ee8413"}
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.368662 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nd2b2" podStartSLOduration=6.343721477 podStartE2EDuration="48.368643134s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:12.700605331 +0000 UTC m=+1456.895779633" lastFinishedPulling="2025-12-02 10:32:54.725526988 +0000 UTC m=+1498.920701290" observedRunningTime="2025-12-02 10:32:56.326935435 +0000 UTC m=+1500.522109737" watchObservedRunningTime="2025-12-02 10:32:56.368643134 +0000 UTC m=+1500.563817436"
Dec 02 10:32:56 crc kubenswrapper[4813]: I1202 10:32:56.370611 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78c47498c4-pwr72" podStartSLOduration=4.383993252 podStartE2EDuration="49.370604489s" podCreationTimestamp="2025-12-02 10:32:07 +0000 UTC" firstStartedPulling="2025-12-02 10:32:09.882908145 +0000 UTC m=+1454.078082447" lastFinishedPulling="2025-12-02 10:32:54.869519382 +0000 UTC m=+1499.064693684" observedRunningTime="2025-12-02 10:32:56.362711528 +0000 UTC m=+1500.557885830" watchObservedRunningTime="2025-12-02 10:32:56.370604489 +0000 UTC m=+1500.565778791"
Dec 02 10:32:56 crc kubenswrapper[4813]: E1202 10:32:56.897506 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" podUID="2f7373b2-cc78-4f73-9ed5-23d0c3144867"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.328597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" event={"ID":"8e626c15-e204-4729-8c0f-95b7b101ec43","Type":"ContainerStarted","Data":"bda5aa328bec6f7cae8812887ffb2866de737e3824fd1c6fce00a2372f8a6304"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.328676 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.330236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" event={"ID":"b2092fa1-ae34-44b4-b89f-d2c1407b911a","Type":"ContainerStarted","Data":"3f7e5cf423a0746d36186320c346d0461d1a8630ab2c69c271adcd74e24e67fb"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.330992 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.332267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9" event={"ID":"3bed8e3c-64ca-47e0-80b2-ec2f40473db9","Type":"ContainerStarted","Data":"ad22a4c2e9e22bc63a26ab14d3bf0254af6e8ab8a1c8aa8b80739bc866031fec"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.332980 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.336210 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.336714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" event={"ID":"aff40ee1-2e46-4923-8138-09046b9568dd","Type":"ContainerStarted","Data":"de31d688cc602a8c2e11e8ba07bca9f697701bc7cb6adc11fd8e36026a78c50f"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.337236 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.339016 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m" event={"ID":"c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4","Type":"ContainerStarted","Data":"819c7f71cc551727a253b8d69c8043ea255a36bfe9989bacf2f019448ad4495d"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.340002 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.341860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" event={"ID":"a22bf838-4122-4704-b8a7-d590e3ba5b65","Type":"ContainerStarted","Data":"69d0bc7cc9df7c163d7350d779c787e967f2a863efecc1220c1828d3980f63c6"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.342058 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.342277 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.343642 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" event={"ID":"2ba57cac-e437-4de6-a3fa-563d41cd0404","Type":"ContainerStarted","Data":"5759d5fc46a495e519f1101d5e6ad0b65a4cce242c33b3a511559387e6cbdf5b"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.345788 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" event={"ID":"95242ae1-57e8-436f-9971-66e273b0d75c","Type":"ContainerStarted","Data":"be58deb21a56cdc5db1162a08f4b31a0fbbeac02954983047560fc5de0ebdfe9"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.346386 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.347962 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.348337 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" event={"ID":"2f7373b2-cc78-4f73-9ed5-23d0c3144867","Type":"ContainerStarted","Data":"8594bf696a7407a69b0904a56767d8ac4f3d948ba51e1e5559ef299aba110f52"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.354070 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" event={"ID":"a9b5d3a4-c74a-4dc7-95e7-ce34faf34401","Type":"ContainerStarted","Data":"1a8288a4f1c6a89686499a3cf1fa87702aca076b8ace3adb5b877b287af9c424"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.354487 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.356618 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.357736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" event={"ID":"f1a3ada5-a084-4500-8c1b-a9e6e3008786","Type":"ContainerStarted","Data":"eb74ea820aec4cd4ad8e26e51b69d04a0b11296b7c879a71d502d8ae738b9730"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.357775 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.359298 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" event={"ID":"9de86006-d480-4e91-904d-dea58373d496","Type":"ContainerStarted","Data":"eab3e91ae30bdee0564468f49802cdaebb33b45fa9d0f1c27fb84a2200946bc5"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.359944 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.361456 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.362049 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" event={"ID":"4da17b88-c060-41ed-ab38-90dc8dd0383e","Type":"ContainerStarted","Data":"1a492a4eb443a3d826f57af54b59e968d4653659d67ff5c4f7204458eb38590b"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.362381 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.363703 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.364594 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" event={"ID":"baa2abea-8891-4e33-b453-e34dc8e15df7","Type":"ContainerStarted","Data":"1021dfe6766376fe8ba93be63d2d91d302cf6de2da6fc5c33382b1142f575681"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.364966 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.366831 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" podStartSLOduration=25.264952613 podStartE2EDuration="49.366821973s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:30.624364778 +0000 UTC m=+1474.819539090" lastFinishedPulling="2025-12-02 10:32:54.726234148 +0000 UTC m=+1498.921408450" observedRunningTime="2025-12-02 10:32:57.361662698 +0000 UTC m=+1501.556837000" watchObservedRunningTime="2025-12-02 10:32:57.366821973 +0000 UTC m=+1501.561996275"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.372359 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" event={"ID":"796ef4ca-26ba-44f0-b23a-c4fd808c5981","Type":"ContainerStarted","Data":"4cac8588a09274d8eda99dabd16601de5c990e2db6bc1441971ec8ef2202e2a3"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.373194 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.375831 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.376188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" event={"ID":"7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec","Type":"ContainerStarted","Data":"083904c8da473d3bfbefd2c33776fc41aee7aeab94f30e26d6973c5b8c44afa3"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.376660 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.379150 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" event={"ID":"da18c237-cd3d-4116-9373-989eaf92e7cd","Type":"ContainerStarted","Data":"0d7de9d594f6bbe442465f8e6882bc06e6a1507c0e50c6a4f7265364ea1cc847"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.379704 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.381224 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" event={"ID":"2b41c1b0-929f-4289-b50d-5567c79a26d8","Type":"ContainerStarted","Data":"01a12016fc7f55058eaf00f9bd6badc4e2c2feb5107a888bb27c5fd18b55b930"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.383937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" event={"ID":"98f2dfc1-669a-430c-a089-859de7ca1688","Type":"ContainerStarted","Data":"5c60083e0d8fb9a3b43ad071920e780aadaf75c3c69e856296b73f0e831869ae"}
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.383962 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.386029 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.392440 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-l6sr9" podStartSLOduration=4.890246841 podStartE2EDuration="49.39242688s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.455512203 +0000 UTC m=+1454.650686505" lastFinishedPulling="2025-12-02 10:32:54.957692242 +0000 UTC m=+1499.152866544" observedRunningTime="2025-12-02 10:32:57.390221498 +0000 UTC m=+1501.585395800" watchObservedRunningTime="2025-12-02 10:32:57.39242688 +0000 UTC m=+1501.587601182"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.428773 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" podStartSLOduration=5.204562595 podStartE2EDuration="49.428751157s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.463367003 +0000 UTC m=+1454.658541305" lastFinishedPulling="2025-12-02 10:32:54.687555575 +0000 UTC m=+1498.882729867" observedRunningTime="2025-12-02 10:32:57.41955743 +0000 UTC m=+1501.614731732" watchObservedRunningTime="2025-12-02 10:32:57.428751157 +0000 UTC m=+1501.623925469"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.443515 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vh67z" podStartSLOduration=4.869262833 podStartE2EDuration="50.44349638s" podCreationTimestamp="2025-12-02 10:32:07 +0000 UTC" firstStartedPulling="2025-12-02 10:32:09.274690898 +0000 UTC m=+1453.469865200" lastFinishedPulling="2025-12-02 10:32:54.848924445 +0000 UTC m=+1499.044098747" observedRunningTime="2025-12-02 10:32:57.439596861 +0000 UTC m=+1501.634771183" watchObservedRunningTime="2025-12-02 10:32:57.44349638 +0000 UTC m=+1501.638670682"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.517605 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" podStartSLOduration=17.041485537 podStartE2EDuration="49.517581306s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.488291171 +0000 UTC m=+1454.683465473" lastFinishedPulling="2025-12-02 10:32:42.96438694 +0000 UTC m=+1487.159561242" observedRunningTime="2025-12-02 10:32:57.479432277 +0000 UTC m=+1501.674606589" watchObservedRunningTime="2025-12-02 10:32:57.517581306 +0000 UTC m=+1501.712755608"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.532135 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-525b9" podStartSLOduration=29.948550693 podStartE2EDuration="49.532059121s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.338974959 +0000 UTC m=+1454.534149261" lastFinishedPulling="2025-12-02 10:32:29.922483387 +0000 UTC m=+1474.117657689" observedRunningTime="2025-12-02 10:32:57.518519512 +0000 UTC m=+1501.713693814" watchObservedRunningTime="2025-12-02 10:32:57.532059121 +0000 UTC m=+1501.727233423"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.550151 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" podStartSLOduration=5.36537707 podStartE2EDuration="49.550131287s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.463353553 +0000 UTC m=+1454.658527855" lastFinishedPulling="2025-12-02 10:32:54.64810777 +0000 UTC m=+1498.843282072" observedRunningTime="2025-12-02 10:32:57.549894171 +0000 UTC m=+1501.745068463" watchObservedRunningTime="2025-12-02 10:32:57.550131287 +0000 UTC m=+1501.745305599"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.620959 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-ptlqc" podStartSLOduration=5.015008847 podStartE2EDuration="50.620932761s" podCreationTimestamp="2025-12-02 10:32:07 +0000 UTC" firstStartedPulling="2025-12-02 10:32:09.27261552 +0000 UTC m=+1453.467789822" lastFinishedPulling="2025-12-02 10:32:54.878539434 +0000 UTC m=+1499.073713736" observedRunningTime="2025-12-02 10:32:57.586416154 +0000 UTC m=+1501.781590446" watchObservedRunningTime="2025-12-02 10:32:57.620932761 +0000 UTC m=+1501.816107063"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.643078 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8sknj" podStartSLOduration=5.225449001 podStartE2EDuration="49.6430552s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.44789662 +0000 UTC m=+1454.643070922" lastFinishedPulling="2025-12-02 10:32:54.865502799 +0000 UTC m=+1499.060677121" observedRunningTime="2025-12-02 10:32:57.639349076 +0000 UTC m=+1501.834523378" watchObservedRunningTime="2025-12-02 10:32:57.6430552 +0000 UTC m=+1501.838229502"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.677788 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" podStartSLOduration=5.485310861 podStartE2EDuration="49.677769883s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.455523094 +0000 UTC m=+1454.650697396" lastFinishedPulling="2025-12-02 10:32:54.647982116 +0000 UTC m=+1498.843156418" observedRunningTime="2025-12-02 10:32:57.671575479 +0000 UTC m=+1501.866749791" watchObservedRunningTime="2025-12-02 10:32:57.677769883 +0000 UTC m=+1501.872944185"
Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.696015 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-z2z7m" podStartSLOduration=5.1774958269999996 podStartE2EDuration="49.695991983s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.443553868
+0000 UTC m=+1454.638728170" lastFinishedPulling="2025-12-02 10:32:54.962050024 +0000 UTC m=+1499.157224326" observedRunningTime="2025-12-02 10:32:57.692589728 +0000 UTC m=+1501.887764030" watchObservedRunningTime="2025-12-02 10:32:57.695991983 +0000 UTC m=+1501.891166285" Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.724466 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-fd42j" podStartSLOduration=5.725305823 podStartE2EDuration="50.72444779s" podCreationTimestamp="2025-12-02 10:32:07 +0000 UTC" firstStartedPulling="2025-12-02 10:32:09.882429662 +0000 UTC m=+1454.077603964" lastFinishedPulling="2025-12-02 10:32:54.881571629 +0000 UTC m=+1499.076745931" observedRunningTime="2025-12-02 10:32:57.723197455 +0000 UTC m=+1501.918371767" watchObservedRunningTime="2025-12-02 10:32:57.72444779 +0000 UTC m=+1501.919622082" Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.759781 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" podStartSLOduration=6.576934097 podStartE2EDuration="50.759740179s" podCreationTimestamp="2025-12-02 10:32:07 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.465105982 +0000 UTC m=+1454.660280284" lastFinishedPulling="2025-12-02 10:32:54.647912064 +0000 UTC m=+1498.843086366" observedRunningTime="2025-12-02 10:32:57.752927138 +0000 UTC m=+1501.948101450" watchObservedRunningTime="2025-12-02 10:32:57.759740179 +0000 UTC m=+1501.954914481" Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.784528 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xbnzh" podStartSLOduration=30.185305984 podStartE2EDuration="49.784506632s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.397822377 +0000 UTC m=+1454.592996679" lastFinishedPulling="2025-12-02 10:32:29.997023025 +0000 UTC m=+1474.192197327" observedRunningTime="2025-12-02 10:32:57.776866328 +0000 UTC m=+1501.972040630" watchObservedRunningTime="2025-12-02 10:32:57.784506632 +0000 UTC m=+1501.979680934" Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.810703 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" podStartSLOduration=3.660203027 podStartE2EDuration="49.810680726s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.37721422 +0000 UTC m=+1454.572388522" lastFinishedPulling="2025-12-02 10:32:56.527691919 +0000 UTC m=+1500.722866221" observedRunningTime="2025-12-02 10:32:57.806012525 +0000 UTC m=+1502.001186827" watchObservedRunningTime="2025-12-02 10:32:57.810680726 +0000 UTC m=+1502.005855028" Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.836943 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-c5x5p" podStartSLOduration=5.291325936 podStartE2EDuration="49.836928901s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.448404054 +0000 UTC m=+1454.643578356" lastFinishedPulling="2025-12-02 10:32:54.994007019 +0000 UTC m=+1499.189181321" observedRunningTime="2025-12-02 10:32:57.834476912 +0000 UTC m=+1502.029651204" watchObservedRunningTime="2025-12-02 10:32:57.836928901 +0000 UTC 
m=+1502.032103203" Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.876233 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-5n7pc" podStartSLOduration=5.324184266 podStartE2EDuration="49.876212091s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.442980942 +0000 UTC m=+1454.638155254" lastFinishedPulling="2025-12-02 10:32:54.995008777 +0000 UTC m=+1499.190183079" observedRunningTime="2025-12-02 10:32:57.866256892 +0000 UTC m=+1502.061431194" watchObservedRunningTime="2025-12-02 10:32:57.876212091 +0000 UTC m=+1502.071386393" Dec 02 10:32:57 crc kubenswrapper[4813]: I1202 10:32:57.887679 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" podStartSLOduration=25.676521191 podStartE2EDuration="49.887648262s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:30.51448942 +0000 UTC m=+1474.709663722" lastFinishedPulling="2025-12-02 10:32:54.725616491 +0000 UTC m=+1498.920790793" observedRunningTime="2025-12-02 10:32:57.883480765 +0000 UTC m=+1502.078655067" watchObservedRunningTime="2025-12-02 10:32:57.887648262 +0000 UTC m=+1502.082822564" Dec 02 10:32:58 crc kubenswrapper[4813]: I1202 10:32:58.873884 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:58 crc kubenswrapper[4813]: I1202 10:32:58.874479 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:58 crc kubenswrapper[4813]: I1202 10:32:58.922760 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:32:59 crc kubenswrapper[4813]: I1202 10:32:59.398172 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" event={"ID":"2f7373b2-cc78-4f73-9ed5-23d0c3144867","Type":"ContainerStarted","Data":"3db191134ccfaa4f0df8af5611bd098ebb8111247ef229b9dbd5bd7b6313ff0d"} Dec 02 10:32:59 crc kubenswrapper[4813]: I1202 10:32:59.420669 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" podStartSLOduration=2.747932844 podStartE2EDuration="51.420649922s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.443187968 +0000 UTC m=+1454.638362270" lastFinishedPulling="2025-12-02 10:32:59.115905046 +0000 UTC m=+1503.311079348" observedRunningTime="2025-12-02 10:32:59.415605821 +0000 UTC m=+1503.610780123" watchObservedRunningTime="2025-12-02 10:32:59.420649922 +0000 UTC m=+1503.615824224" Dec 02 10:33:00 crc kubenswrapper[4813]: I1202 10:33:00.404560 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" Dec 02 10:33:01 crc kubenswrapper[4813]: I1202 10:33:01.152744 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65b4bc588-254sd" Dec 02 10:33:04 crc kubenswrapper[4813]: I1202 10:33:04.273653 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:33:04 crc kubenswrapper[4813]: I1202 10:33:04.274004 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:33:04 crc kubenswrapper[4813]: I1202 10:33:04.274060 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:33:04 crc kubenswrapper[4813]: I1202 10:33:04.274776 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:33:04 crc kubenswrapper[4813]: I1202 10:33:04.274840 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" gracePeriod=600 Dec 02 10:33:04 crc kubenswrapper[4813]: I1202 10:33:04.324525 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-2sk2z" Dec 02 10:33:04 crc kubenswrapper[4813]: I1202 10:33:04.600709 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg" Dec 02 10:33:05 crc kubenswrapper[4813]: E1202 10:33:05.031256 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:33:05 crc kubenswrapper[4813]: I1202 10:33:05.443540 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" exitCode=0 Dec 02 10:33:05 crc kubenswrapper[4813]: I1202 10:33:05.443593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376"} Dec 02 10:33:05 crc kubenswrapper[4813]: I1202 10:33:05.443641 4813 scope.go:117] "RemoveContainer" containerID="6026076896f55bb919161f6d03c4a9615a39a32a45726f9be0f5d24c59e6a733" Dec 02 10:33:05 crc kubenswrapper[4813]: I1202 10:33:05.444118 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:33:05 crc kubenswrapper[4813]: E1202 10:33:05.444451 4813 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:33:08 crc kubenswrapper[4813]: I1202 10:33:08.311639 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-sxk5l" Dec 02 10:33:08 crc kubenswrapper[4813]: I1202 10:33:08.748307 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-dprfp" Dec 02 10:33:08 crc kubenswrapper[4813]: I1202 10:33:08.918659 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:33:08 crc kubenswrapper[4813]: I1202 10:33:08.923853 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-grz2d" Dec 02 10:33:08 crc kubenswrapper[4813]: I1202 10:33:08.968955 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd2b2"] Dec 02 10:33:09 crc kubenswrapper[4813]: I1202 10:33:09.004302 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-wj22v" Dec 02 10:33:09 crc kubenswrapper[4813]: E1202 10:33:09.069362 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" podUID="afe1c5ed-adc9-4200-b1c0-8938e759daed" Dec 02 10:33:09 crc kubenswrapper[4813]: I1202 10:33:09.103470 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wxld7" Dec 02 10:33:09 crc kubenswrapper[4813]: I1202 10:33:09.316248 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-nhfvs" Dec 02 10:33:09 crc kubenswrapper[4813]: I1202 10:33:09.477220 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nd2b2" podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerName="registry-server" containerID="cri-o://744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6" gracePeriod=2 Dec 02 10:33:09 crc kubenswrapper[4813]: I1202 10:33:09.632298 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-bflrb" Dec 02 10:33:09 crc kubenswrapper[4813]: I1202 10:33:09.921221 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.078393 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-catalog-content\") pod \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.078513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-utilities\") pod \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.078554 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpcz4\" (UniqueName: \"kubernetes.io/projected/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-kube-api-access-vpcz4\") pod \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\" (UID: \"a29b0d41-84ff-4b6a-9ee4-529e207c6a09\") " Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.079622 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-utilities" (OuterVolumeSpecName: "utilities") pod "a29b0d41-84ff-4b6a-9ee4-529e207c6a09" (UID: "a29b0d41-84ff-4b6a-9ee4-529e207c6a09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.084295 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-kube-api-access-vpcz4" (OuterVolumeSpecName: "kube-api-access-vpcz4") pod "a29b0d41-84ff-4b6a-9ee4-529e207c6a09" (UID: "a29b0d41-84ff-4b6a-9ee4-529e207c6a09"). InnerVolumeSpecName "kube-api-access-vpcz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.096012 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a29b0d41-84ff-4b6a-9ee4-529e207c6a09" (UID: "a29b0d41-84ff-4b6a-9ee4-529e207c6a09"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.179968 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.180353 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpcz4\" (UniqueName: \"kubernetes.io/projected/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-kube-api-access-vpcz4\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.180442 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29b0d41-84ff-4b6a-9ee4-529e207c6a09-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.492049 4813 generic.go:334] "Generic (PLEG): container finished" podID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerID="744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6" exitCode=0 Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.492113 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd2b2" event={"ID":"a29b0d41-84ff-4b6a-9ee4-529e207c6a09","Type":"ContainerDied","Data":"744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6"} Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.492132 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nd2b2" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.492162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nd2b2" event={"ID":"a29b0d41-84ff-4b6a-9ee4-529e207c6a09","Type":"ContainerDied","Data":"54eac5a2ef7775eece9ff87d97094d7f19f27de2a4ebfedc20a2e5b76f5c786f"} Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.492188 4813 scope.go:117] "RemoveContainer" containerID="744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.517544 4813 scope.go:117] "RemoveContainer" containerID="b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.536627 4813 scope.go:117] "RemoveContainer" containerID="0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.540846 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd2b2"] Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.546476 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nd2b2"] Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.569517 4813 scope.go:117] "RemoveContainer" containerID="744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6" Dec 02 10:33:10 crc kubenswrapper[4813]: E1202 10:33:10.570269 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6\": container with ID starting with 744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6 not found: ID does not exist" containerID="744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.570440 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6"} err="failed to get container status \"744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6\": rpc error: code = NotFound desc = could not find container \"744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6\": container with ID starting with 744f3948fcc8c25dfbbe16f6612db57798e03e9de03bee25a4943094681e7ef6 not found: ID does not exist" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.570568 4813 scope.go:117] "RemoveContainer" containerID="b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb" Dec 02 10:33:10 crc kubenswrapper[4813]: E1202 10:33:10.571428 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb\": container with ID starting with b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb not found: ID does not exist" containerID="b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.571466 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb"} err="failed to get container status \"b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb\": rpc error: code = NotFound desc = could not find container \"b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb\": container with ID starting with b00c4f299ccf05deebca4f2ed9efadd6a3512bf115981cdaa04903da363368bb not found: ID does not exist" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.571489 4813 scope.go:117] "RemoveContainer" containerID="0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158" Dec 02 10:33:10 crc kubenswrapper[4813]: E1202 10:33:10.572294 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158\": container with ID starting with 0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158 not found: ID does not exist" containerID="0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158" Dec 02 10:33:10 crc kubenswrapper[4813]: I1202 10:33:10.572395 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158"} err="failed to get container status \"0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158\": rpc error: code = NotFound desc = could not find container \"0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158\": container with ID starting with 0dc004a9a340ee6d7e6e50ff6d0ae084e697b2333548f6245a1141a605f02158 not found: ID does not exist" Dec 02 10:33:12 crc kubenswrapper[4813]: I1202 10:33:12.079386 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" path="/var/lib/kubelet/pods/a29b0d41-84ff-4b6a-9ee4-529e207c6a09/volumes" Dec 02 10:33:17 crc kubenswrapper[4813]: I1202 10:33:17.068484 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:33:17 crc kubenswrapper[4813]: E1202 10:33:17.069771 4813 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:33:24 crc kubenswrapper[4813]: I1202 10:33:24.598533 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" event={"ID":"afe1c5ed-adc9-4200-b1c0-8938e759daed","Type":"ContainerStarted","Data":"dff35983b531d365ba6919ea29deeb7603f807405e62b4d10437d919385f8eb4"} Dec 02 10:33:24 crc kubenswrapper[4813]: I1202 10:33:24.618803 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lwjwp" podStartSLOduration=3.5424599199999998 podStartE2EDuration="1m16.61877318s" podCreationTimestamp="2025-12-02 10:32:08 +0000 UTC" firstStartedPulling="2025-12-02 10:32:10.470141403 +0000 UTC m=+1454.665315705" lastFinishedPulling="2025-12-02 10:33:23.546454663 +0000 UTC m=+1527.741628965" observedRunningTime="2025-12-02 10:33:24.613268156 +0000 UTC m=+1528.808442478" watchObservedRunningTime="2025-12-02 10:33:24.61877318 +0000 UTC m=+1528.813947482" Dec 02 10:33:32 crc kubenswrapper[4813]: I1202 10:33:32.068578 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:33:32 crc kubenswrapper[4813]: E1202 10:33:32.069835 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.939759 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v59fl"] Dec 02 10:33:39 crc kubenswrapper[4813]: E1202 10:33:39.940770 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerName="extract-content" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.940788 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerName="extract-content" Dec 02 10:33:39 crc kubenswrapper[4813]: E1202 10:33:39.940820 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerName="registry-server" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.940828 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerName="registry-server" Dec 02 10:33:39 crc kubenswrapper[4813]: E1202 10:33:39.940849 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerName="extract-utilities" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.940857 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerName="extract-utilities" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.941019 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a29b0d41-84ff-4b6a-9ee4-529e207c6a09" containerName="registry-server" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.944825 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.951871 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.952238 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.952277 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.953907 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v59fl"] Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.955925 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-782zb" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.986422 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lmpcd"] Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.987546 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:39 crc kubenswrapper[4813]: I1202 10:33:39.990682 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.004812 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lmpcd"] Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.086419 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-config\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.086808 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0bcd63-78a1-4f34-b025-187331eb158d-config\") pod \"dnsmasq-dns-675f4bcbfc-v59fl\" (UID: \"6c0bcd63-78a1-4f34-b025-187331eb158d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.087164 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.087266 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j72x7\" (UniqueName: \"kubernetes.io/projected/6c0bcd63-78a1-4f34-b025-187331eb158d-kube-api-access-j72x7\") pod \"dnsmasq-dns-675f4bcbfc-v59fl\" (UID: \"6c0bcd63-78a1-4f34-b025-187331eb158d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.087296 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4b5l\" (UniqueName: 
\"kubernetes.io/projected/e0cfc1e0-d3ce-466b-b5db-889efea72e81-kube-api-access-p4b5l\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.188303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0bcd63-78a1-4f34-b025-187331eb158d-config\") pod \"dnsmasq-dns-675f4bcbfc-v59fl\" (UID: \"6c0bcd63-78a1-4f34-b025-187331eb158d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.188397 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.188455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j72x7\" (UniqueName: \"kubernetes.io/projected/6c0bcd63-78a1-4f34-b025-187331eb158d-kube-api-access-j72x7\") pod \"dnsmasq-dns-675f4bcbfc-v59fl\" (UID: \"6c0bcd63-78a1-4f34-b025-187331eb158d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.188477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4b5l\" (UniqueName: \"kubernetes.io/projected/e0cfc1e0-d3ce-466b-b5db-889efea72e81-kube-api-access-p4b5l\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.188501 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-config\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.189337 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0bcd63-78a1-4f34-b025-187331eb158d-config\") pod \"dnsmasq-dns-675f4bcbfc-v59fl\" (UID: \"6c0bcd63-78a1-4f34-b025-187331eb158d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.189502 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-config\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.189655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.213036 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j72x7\" (UniqueName: \"kubernetes.io/projected/6c0bcd63-78a1-4f34-b025-187331eb158d-kube-api-access-j72x7\") pod \"dnsmasq-dns-675f4bcbfc-v59fl\" (UID: 
\"6c0bcd63-78a1-4f34-b025-187331eb158d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.215225 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4b5l\" (UniqueName: \"kubernetes.io/projected/e0cfc1e0-d3ce-466b-b5db-889efea72e81-kube-api-access-p4b5l\") pod \"dnsmasq-dns-78dd6ddcc-lmpcd\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.264702 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.305393 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.739857 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v59fl"] Dec 02 10:33:40 crc kubenswrapper[4813]: W1202 10:33:40.749413 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c0bcd63_78a1_4f34_b025_187331eb158d.slice/crio-17f75c2b884eee088ba05d7ef668a4ff92ef02d37a5a5ff3ef80a985f9754b92 WatchSource:0}: Error finding container 17f75c2b884eee088ba05d7ef668a4ff92ef02d37a5a5ff3ef80a985f9754b92: Status 404 returned error can't find the container with id 17f75c2b884eee088ba05d7ef668a4ff92ef02d37a5a5ff3ef80a985f9754b92 Dec 02 10:33:40 crc kubenswrapper[4813]: I1202 10:33:40.810697 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lmpcd"] Dec 02 10:33:40 crc kubenswrapper[4813]: W1202 10:33:40.815250 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0cfc1e0_d3ce_466b_b5db_889efea72e81.slice/crio-1cd99698dc7a4200c0bf87e540237d4b6d0a5563511cdf1d9edccb28ff145732 WatchSource:0}: Error finding container 1cd99698dc7a4200c0bf87e540237d4b6d0a5563511cdf1d9edccb28ff145732: Status 404 returned error can't find the container with id 1cd99698dc7a4200c0bf87e540237d4b6d0a5563511cdf1d9edccb28ff145732 Dec 02 10:33:41 crc kubenswrapper[4813]: I1202 10:33:41.718138 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" event={"ID":"6c0bcd63-78a1-4f34-b025-187331eb158d","Type":"ContainerStarted","Data":"17f75c2b884eee088ba05d7ef668a4ff92ef02d37a5a5ff3ef80a985f9754b92"} Dec 02 10:33:41 crc kubenswrapper[4813]: I1202 10:33:41.719973 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" event={"ID":"e0cfc1e0-d3ce-466b-b5db-889efea72e81","Type":"ContainerStarted","Data":"1cd99698dc7a4200c0bf87e540237d4b6d0a5563511cdf1d9edccb28ff145732"} Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.148063 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v59fl"] Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.178377 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lz8hq"] Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.179661 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.193998 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lz8hq"] Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.340010 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl5tw\" (UniqueName: \"kubernetes.io/projected/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-kube-api-access-rl5tw\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.340891 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.340982 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-config\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.444626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl5tw\" (UniqueName: \"kubernetes.io/projected/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-kube-api-access-rl5tw\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.444689 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.444734 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-config\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.445858 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-config\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.446150 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.479197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl5tw\" (UniqueName: 
\"kubernetes.io/projected/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-kube-api-access-rl5tw\") pod \"dnsmasq-dns-666b6646f7-lz8hq\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.485952 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lmpcd"] Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.512850 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.513778 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wknc6"] Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.515052 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.534214 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wknc6"] Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.647906 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.648174 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-config\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.648229 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8gz6\" (UniqueName: \"kubernetes.io/projected/020a3922-2d98-4790-81af-81c2f00f5389-kube-api-access-x8gz6\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.749659 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-config\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.749723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8gz6\" (UniqueName: \"kubernetes.io/projected/020a3922-2d98-4790-81af-81c2f00f5389-kube-api-access-x8gz6\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.749777 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.751213 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.751822 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-config\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.784992 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8gz6\" (UniqueName: \"kubernetes.io/projected/020a3922-2d98-4790-81af-81c2f00f5389-kube-api-access-x8gz6\") pod \"dnsmasq-dns-57d769cc4f-wknc6\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:43 crc kubenswrapper[4813]: I1202 10:33:43.834925 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.137259 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lz8hq"] Dec 02 10:33:44 crc kubenswrapper[4813]: W1202 10:33:44.150471 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29c09d6b_89ce_458f_9848_f8f3e7e3c7ee.slice/crio-c7794ec9e0705d474b081194abd34ab815880689cee6fc4b9f3989229d91fdaf WatchSource:0}: Error finding container c7794ec9e0705d474b081194abd34ab815880689cee6fc4b9f3989229d91fdaf: Status 404 returned error can't find the container with id c7794ec9e0705d474b081194abd34ab815880689cee6fc4b9f3989229d91fdaf Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.337494 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.338966 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.347943 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.349050 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.349227 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qhxq6" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.349430 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.349472 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.349641 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.349652 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.376773 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.386308 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wknc6"] Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.466536 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.466598 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/250ea07a-903e-418f-adf4-0e720a9807f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.466624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.466822 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.466894 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.466985 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.467042 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8t8\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-kube-api-access-rs8t8\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.467173 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/250ea07a-903e-418f-adf4-0e720a9807f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.467211 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.467277 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.467546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/250ea07a-903e-418f-adf4-0e720a9807f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569425 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569453 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569491 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569515 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8t8\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-kube-api-access-rs8t8\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/250ea07a-903e-418f-adf4-0e720a9807f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569559 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569586 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569605 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.569628 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.570604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.570702 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.570777 4813 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.571246 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.571699 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.572129 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.577710 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/250ea07a-903e-418f-adf4-0e720a9807f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.579123 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.579719 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.579893 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/250ea07a-903e-418f-adf4-0e720a9807f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.590369 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8t8\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-kube-api-access-rs8t8\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.597089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc 
kubenswrapper[4813]: I1202 10:33:44.679304 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.681248 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.683097 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.686759 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.689692 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.690121 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.690444 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xbdbr" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.690592 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.690729 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.699614 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.700656 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.754221 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" event={"ID":"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee","Type":"ContainerStarted","Data":"c7794ec9e0705d474b081194abd34ab815880689cee6fc4b9f3989229d91fdaf"} Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.772654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26kst\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-kube-api-access-26kst\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.772734 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.772779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.772843 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.772875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.772906 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.772950 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.772978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.773006 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.773029 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.773058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876203 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876263 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26kst\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-kube-api-access-26kst\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876291 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876436 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.876510 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.877435 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.882406 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.883607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.886908 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.887517 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.888565 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.889542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.892131 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.895057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.904618 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.912395 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26kst\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-kube-api-access-26kst\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:44 crc kubenswrapper[4813]: I1202 10:33:44.930917 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:45 crc kubenswrapper[4813]: I1202 10:33:45.020834 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.165160 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.167256 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.174746 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-sn2v2" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.175016 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.177689 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.179135 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.180310 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.184061 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.301257 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-kolla-config\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.301348 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/53e726ce-4b04-4f80-b0a6-20919949a0e6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.301378 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.301398 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dkl\" (UniqueName: \"kubernetes.io/projected/53e726ce-4b04-4f80-b0a6-20919949a0e6-kube-api-access-b5dkl\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.301436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/53e726ce-4b04-4f80-b0a6-20919949a0e6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.301511 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-config-data-default\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.301529 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.301543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e726ce-4b04-4f80-b0a6-20919949a0e6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.402993 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/53e726ce-4b04-4f80-b0a6-20919949a0e6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.403062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-config-data-default\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.403108 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.403126 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e726ce-4b04-4f80-b0a6-20919949a0e6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " 
pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.403192 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-kolla-config\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.403217 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/53e726ce-4b04-4f80-b0a6-20919949a0e6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.403242 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.403268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5dkl\" (UniqueName: \"kubernetes.io/projected/53e726ce-4b04-4f80-b0a6-20919949a0e6-kube-api-access-b5dkl\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.405436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-kolla-config\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.405755 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/53e726ce-4b04-4f80-b0a6-20919949a0e6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.405905 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.405925 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-config-data-default\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.406257 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53e726ce-4b04-4f80-b0a6-20919949a0e6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.421891 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/53e726ce-4b04-4f80-b0a6-20919949a0e6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.422628 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5dkl\" (UniqueName: \"kubernetes.io/projected/53e726ce-4b04-4f80-b0a6-20919949a0e6-kube-api-access-b5dkl\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.424678 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e726ce-4b04-4f80-b0a6-20919949a0e6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.430417 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"53e726ce-4b04-4f80-b0a6-20919949a0e6\") " pod="openstack/openstack-galera-0" Dec 02 10:33:46 crc kubenswrapper[4813]: I1202 10:33:46.499936 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.067929 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:33:47 crc kubenswrapper[4813]: E1202 10:33:47.068281 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:33:47 crc kubenswrapper[4813]: W1202 10:33:47.282924 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod020a3922_2d98_4790_81af_81c2f00f5389.slice/crio-bcf082fda2d8cbcc6b12dd444aa1c6e4ef6551bfceafbe78c5010c00d478b368 WatchSource:0}: Error finding container bcf082fda2d8cbcc6b12dd444aa1c6e4ef6551bfceafbe78c5010c00d478b368: Status 404 returned error can't find the container with id bcf082fda2d8cbcc6b12dd444aa1c6e4ef6551bfceafbe78c5010c00d478b368 Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.414055 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.415485 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.417803 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.418033 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wdrzx" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.418215 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.418495 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.432410 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.521263 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.521344 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5717348-ff61-4e62-9c41-3553228842f9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.521374 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.521417 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.521439 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c5717348-ff61-4e62-9c41-3553228842f9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.521459 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5717348-ff61-4e62-9c41-3553228842f9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.521495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.521565 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvld\" (UniqueName: \"kubernetes.io/projected/c5717348-ff61-4e62-9c41-3553228842f9-kube-api-access-9fvld\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.622961 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.623036 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c5717348-ff61-4e62-9c41-3553228842f9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.623089 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5717348-ff61-4e62-9c41-3553228842f9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.623148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.623193 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvld\" (UniqueName: \"kubernetes.io/projected/c5717348-ff61-4e62-9c41-3553228842f9-kube-api-access-9fvld\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.623229 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.623269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5717348-ff61-4e62-9c41-3553228842f9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.623313 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.623359 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.624732 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c5717348-ff61-4e62-9c41-3553228842f9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.625170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.625424 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.625500 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5717348-ff61-4e62-9c41-3553228842f9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.648631 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.651407 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvld\" (UniqueName: \"kubernetes.io/projected/c5717348-ff61-4e62-9c41-3553228842f9-kube-api-access-9fvld\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.651670 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5717348-ff61-4e62-9c41-3553228842f9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.653556 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5717348-ff61-4e62-9c41-3553228842f9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"c5717348-ff61-4e62-9c41-3553228842f9\") " pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.744606 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.780300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" event={"ID":"020a3922-2d98-4790-81af-81c2f00f5389","Type":"ContainerStarted","Data":"bcf082fda2d8cbcc6b12dd444aa1c6e4ef6551bfceafbe78c5010c00d478b368"} Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.795375 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.796308 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.798148 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.798186 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sgz48" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.808902 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.810467 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.928622 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m8rt\" (UniqueName: \"kubernetes.io/projected/481fa78c-0062-4dc4-b7a6-c8f5845c5480-kube-api-access-2m8rt\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.928717 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481fa78c-0062-4dc4-b7a6-c8f5845c5480-kolla-config\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.928752 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481fa78c-0062-4dc4-b7a6-c8f5845c5480-combined-ca-bundle\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.928826 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/481fa78c-0062-4dc4-b7a6-c8f5845c5480-memcached-tls-certs\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:47 crc kubenswrapper[4813]: I1202 10:33:47.928847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/481fa78c-0062-4dc4-b7a6-c8f5845c5480-config-data\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.031116 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/481fa78c-0062-4dc4-b7a6-c8f5845c5480-memcached-tls-certs\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.031192 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/481fa78c-0062-4dc4-b7a6-c8f5845c5480-config-data\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.031264 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m8rt\" (UniqueName: \"kubernetes.io/projected/481fa78c-0062-4dc4-b7a6-c8f5845c5480-kube-api-access-2m8rt\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.031314 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481fa78c-0062-4dc4-b7a6-c8f5845c5480-kolla-config\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.031355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481fa78c-0062-4dc4-b7a6-c8f5845c5480-combined-ca-bundle\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.032555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481fa78c-0062-4dc4-b7a6-c8f5845c5480-kolla-config\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.032582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/481fa78c-0062-4dc4-b7a6-c8f5845c5480-config-data\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.035927 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/481fa78c-0062-4dc4-b7a6-c8f5845c5480-memcached-tls-certs\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.037869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481fa78c-0062-4dc4-b7a6-c8f5845c5480-combined-ca-bundle\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.057570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m8rt\" (UniqueName: \"kubernetes.io/projected/481fa78c-0062-4dc4-b7a6-c8f5845c5480-kube-api-access-2m8rt\") pod \"memcached-0\" (UID: \"481fa78c-0062-4dc4-b7a6-c8f5845c5480\") " pod="openstack/memcached-0" Dec 02 10:33:48 crc kubenswrapper[4813]: I1202 10:33:48.122773 4813 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/memcached-0" Dec 02 10:33:50 crc kubenswrapper[4813]: I1202 10:33:50.530909 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:33:50 crc kubenswrapper[4813]: I1202 10:33:50.532265 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:33:50 crc kubenswrapper[4813]: I1202 10:33:50.550181 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:33:50 crc kubenswrapper[4813]: I1202 10:33:50.550292 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zqws5" Dec 02 10:33:50 crc kubenswrapper[4813]: I1202 10:33:50.573696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6b9v\" (UniqueName: \"kubernetes.io/projected/aeb0e843-c886-42eb-844c-a544d47c8c94-kube-api-access-k6b9v\") pod \"kube-state-metrics-0\" (UID: \"aeb0e843-c886-42eb-844c-a544d47c8c94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:33:50 crc kubenswrapper[4813]: I1202 10:33:50.674764 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6b9v\" (UniqueName: \"kubernetes.io/projected/aeb0e843-c886-42eb-844c-a544d47c8c94-kube-api-access-k6b9v\") pod \"kube-state-metrics-0\" (UID: \"aeb0e843-c886-42eb-844c-a544d47c8c94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:33:50 crc kubenswrapper[4813]: I1202 10:33:50.697240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6b9v\" (UniqueName: \"kubernetes.io/projected/aeb0e843-c886-42eb-844c-a544d47c8c94-kube-api-access-k6b9v\") pod \"kube-state-metrics-0\" (UID: \"aeb0e843-c886-42eb-844c-a544d47c8c94\") " pod="openstack/kube-state-metrics-0" Dec 02 10:33:50 crc kubenswrapper[4813]: I1202 10:33:50.853670 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.251823 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hfxg4"] Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.253712 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.256228 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-n7xf2" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.256550 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.257656 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.270933 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hfxg4"] Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.287253 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zmgkl"] Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.289006 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.313053 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zmgkl"] Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.330668 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w9lw\" (UniqueName: \"kubernetes.io/projected/0ce6e9c3-8bfa-4bea-8b33-497328af7573-kube-api-access-4w9lw\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.330712 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce6e9c3-8bfa-4bea-8b33-497328af7573-combined-ca-bundle\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.330920 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-log\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce6e9c3-8bfa-4bea-8b33-497328af7573-ovn-controller-tls-certs\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-etc-ovs\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331238 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-run\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331275 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-lib\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331324 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jhtf\" (UniqueName: \"kubernetes.io/projected/2dbe4376-1955-47b0-9d67-0d2188ef1532-kube-api-access-2jhtf\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331393 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-log-ovn\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331424 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbe4376-1955-47b0-9d67-0d2188ef1532-scripts\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331486 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ce6e9c3-8bfa-4bea-8b33-497328af7573-scripts\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-run\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.331634 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-run-ovn\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433111 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jhtf\" (UniqueName: \"kubernetes.io/projected/2dbe4376-1955-47b0-9d67-0d2188ef1532-kube-api-access-2jhtf\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433183 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-log-ovn\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433223 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbe4376-1955-47b0-9d67-0d2188ef1532-scripts\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ce6e9c3-8bfa-4bea-8b33-497328af7573-scripts\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433285 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-run\") pod \"ovn-controller-ovs-zmgkl\" (UID: 
\"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433347 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-run-ovn\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433371 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w9lw\" (UniqueName: \"kubernetes.io/projected/0ce6e9c3-8bfa-4bea-8b33-497328af7573-kube-api-access-4w9lw\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433387 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce6e9c3-8bfa-4bea-8b33-497328af7573-combined-ca-bundle\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433405 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-log\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433432 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce6e9c3-8bfa-4bea-8b33-497328af7573-ovn-controller-tls-certs\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433454 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-etc-ovs\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-run\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.433492 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-lib\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.435667 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ce6e9c3-8bfa-4bea-8b33-497328af7573-scripts\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.435676 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbe4376-1955-47b0-9d67-0d2188ef1532-scripts\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.439361 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce6e9c3-8bfa-4bea-8b33-497328af7573-combined-ca-bundle\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.439534 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce6e9c3-8bfa-4bea-8b33-497328af7573-ovn-controller-tls-certs\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.448952 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-log-ovn\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.450883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-lib\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.450913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-etc-ovs\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.451031 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-log\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.451031 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-run-ovn\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.451089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ce6e9c3-8bfa-4bea-8b33-497328af7573-var-run\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.451098 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbe4376-1955-47b0-9d67-0d2188ef1532-var-run\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.467787 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w9lw\" (UniqueName: \"kubernetes.io/projected/0ce6e9c3-8bfa-4bea-8b33-497328af7573-kube-api-access-4w9lw\") pod \"ovn-controller-hfxg4\" (UID: \"0ce6e9c3-8bfa-4bea-8b33-497328af7573\") " pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.476842 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jhtf\" (UniqueName: \"kubernetes.io/projected/2dbe4376-1955-47b0-9d67-0d2188ef1532-kube-api-access-2jhtf\") pod \"ovn-controller-ovs-zmgkl\" (UID: \"2dbe4376-1955-47b0-9d67-0d2188ef1532\") " pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.583537 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4" Dec 02 10:33:54 crc kubenswrapper[4813]: I1202 10:33:54.616681 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.129580 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.131195 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.133504 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.133705 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.134060 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.134845 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-ndbsf" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.134980 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.138772 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.247769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.247813 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79685aab-e537-450f-aecc-0768e316bf66-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.247838 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 
10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.247860 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql5pc\" (UniqueName: \"kubernetes.io/projected/79685aab-e537-450f-aecc-0768e316bf66-kube-api-access-ql5pc\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.247884 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.247922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79685aab-e537-450f-aecc-0768e316bf66-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.247936 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79685aab-e537-450f-aecc-0768e316bf66-config\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.247976 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.351756 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.352292 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79685aab-e537-450f-aecc-0768e316bf66-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.352346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.352375 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql5pc\" (UniqueName: \"kubernetes.io/projected/79685aab-e537-450f-aecc-0768e316bf66-kube-api-access-ql5pc\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.352404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.352459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79685aab-e537-450f-aecc-0768e316bf66-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.352476 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79685aab-e537-450f-aecc-0768e316bf66-config\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.352541 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.353298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79685aab-e537-450f-aecc-0768e316bf66-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.353571 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79685aab-e537-450f-aecc-0768e316bf66-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.353569 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.353949 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79685aab-e537-450f-aecc-0768e316bf66-config\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.357529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.358702 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.359428 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79685aab-e537-450f-aecc-0768e316bf66-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.372315 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql5pc\" (UniqueName: \"kubernetes.io/projected/79685aab-e537-450f-aecc-0768e316bf66-kube-api-access-ql5pc\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.376159 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"79685aab-e537-450f-aecc-0768e316bf66\") " pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:55 crc kubenswrapper[4813]: I1202 10:33:55.464373 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.067761 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:33:58 crc kubenswrapper[4813]: E1202 10:33:58.068014 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.427485 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.429315 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.431197 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.431336 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-r4j9r" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.431739 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.431902 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.439448 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.500228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71370d19-e630-47a0-a25e-8815ab28d976-config\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.500299 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.500395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71370d19-e630-47a0-a25e-8815ab28d976-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.500505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.500535 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.500575 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdpp\" (UniqueName: \"kubernetes.io/projected/71370d19-e630-47a0-a25e-8815ab28d976-kube-api-access-vcdpp\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.500621 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.500658 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71370d19-e630-47a0-a25e-8815ab28d976-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602279 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdpp\" (UniqueName: \"kubernetes.io/projected/71370d19-e630-47a0-a25e-8815ab28d976-kube-api-access-vcdpp\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602392 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71370d19-e630-47a0-a25e-8815ab28d976-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602449 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71370d19-e630-47a0-a25e-8815ab28d976-config\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602469 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71370d19-e630-47a0-a25e-8815ab28d976-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602593 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.602942 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.603332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71370d19-e630-47a0-a25e-8815ab28d976-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.604150 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71370d19-e630-47a0-a25e-8815ab28d976-config\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.604412 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71370d19-e630-47a0-a25e-8815ab28d976-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.610665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.610683 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.612348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71370d19-e630-47a0-a25e-8815ab28d976-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.620658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdpp\" (UniqueName: \"kubernetes.io/projected/71370d19-e630-47a0-a25e-8815ab28d976-kube-api-access-vcdpp\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.624443 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"71370d19-e630-47a0-a25e-8815ab28d976\") " pod="openstack/ovsdbserver-sb-0" Dec 02 10:33:58 crc kubenswrapper[4813]: I1202 10:33:58.762425 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.681666 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.683889 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl5tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-lz8hq_openstack(29c09d6b-89ce-458f-9848-f8f3e7e3c7ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.685759 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" podUID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.713601 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.713796 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4b5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-lmpcd_openstack(e0cfc1e0-d3ce-466b-b5db-889efea72e81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.714971 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" podUID="e0cfc1e0-d3ce-466b-b5db-889efea72e81" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.795525 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.796056 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j72x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-v59fl_openstack(6c0bcd63-78a1-4f34-b025-187331eb158d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:34:00 crc kubenswrapper[4813]: E1202 10:34:00.797585 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" podUID="6c0bcd63-78a1-4f34-b025-187331eb158d" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.139883 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.612756 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.620415 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.641677 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.648430 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:34:01 crc kubenswrapper[4813]: W1202 10:34:01.649452 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5717348_ff61_4e62_9c41_3553228842f9.slice/crio-667a85e9155c75848e68755500bfafbbb4aded73cfd70f2f82bba586984e311a WatchSource:0}: Error finding container 667a85e9155c75848e68755500bfafbbb4aded73cfd70f2f82bba586984e311a: Status 404 returned error can't find the container with id 667a85e9155c75848e68755500bfafbbb4aded73cfd70f2f82bba586984e311a Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.675312 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.686866 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.758311 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j72x7\" (UniqueName: \"kubernetes.io/projected/6c0bcd63-78a1-4f34-b025-187331eb158d-kube-api-access-j72x7\") pod \"6c0bcd63-78a1-4f34-b025-187331eb158d\" (UID: \"6c0bcd63-78a1-4f34-b025-187331eb158d\") " Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.758466 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-dns-svc\") pod \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.758494 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0bcd63-78a1-4f34-b025-187331eb158d-config\") pod \"6c0bcd63-78a1-4f34-b025-187331eb158d\" (UID: \"6c0bcd63-78a1-4f34-b025-187331eb158d\") " Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.758531 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-config\") pod \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.758554 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4b5l\" (UniqueName: \"kubernetes.io/projected/e0cfc1e0-d3ce-466b-b5db-889efea72e81-kube-api-access-p4b5l\") pod \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\" (UID: \"e0cfc1e0-d3ce-466b-b5db-889efea72e81\") " Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.760404 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0cfc1e0-d3ce-466b-b5db-889efea72e81" (UID: "e0cfc1e0-d3ce-466b-b5db-889efea72e81"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.760446 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-config" (OuterVolumeSpecName: "config") pod "e0cfc1e0-d3ce-466b-b5db-889efea72e81" (UID: "e0cfc1e0-d3ce-466b-b5db-889efea72e81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.760464 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0bcd63-78a1-4f34-b025-187331eb158d-config" (OuterVolumeSpecName: "config") pod "6c0bcd63-78a1-4f34-b025-187331eb158d" (UID: "6c0bcd63-78a1-4f34-b025-187331eb158d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.764344 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cfc1e0-d3ce-466b-b5db-889efea72e81-kube-api-access-p4b5l" (OuterVolumeSpecName: "kube-api-access-p4b5l") pod "e0cfc1e0-d3ce-466b-b5db-889efea72e81" (UID: "e0cfc1e0-d3ce-466b-b5db-889efea72e81"). InnerVolumeSpecName "kube-api-access-p4b5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.764883 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0bcd63-78a1-4f34-b025-187331eb158d-kube-api-access-j72x7" (OuterVolumeSpecName: "kube-api-access-j72x7") pod "6c0bcd63-78a1-4f34-b025-187331eb158d" (UID: "6c0bcd63-78a1-4f34-b025-187331eb158d"). InnerVolumeSpecName "kube-api-access-j72x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:01 crc kubenswrapper[4813]: W1202 10:34:01.834927 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaeb0e843_c886_42eb_844c_a544d47c8c94.slice/crio-4dd220f75990b613d47856437e9d4266f4b4f593f873a4fa70cbbff199d5196f WatchSource:0}: Error finding container 4dd220f75990b613d47856437e9d4266f4b4f593f873a4fa70cbbff199d5196f: Status 404 returned error can't find the container with id 4dd220f75990b613d47856437e9d4266f4b4f593f873a4fa70cbbff199d5196f Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.835020 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.849315 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hfxg4"] Dec 02 10:34:01 crc kubenswrapper[4813]: W1202 10:34:01.850586 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ce6e9c3_8bfa_4bea_8b33_497328af7573.slice/crio-102ebcaca6a682dba4a51823880e929c9308e2baabbca98f3895028f197ec0a5 WatchSource:0}: Error finding container 102ebcaca6a682dba4a51823880e929c9308e2baabbca98f3895028f197ec0a5: Status 404 returned error can't find the container with id 102ebcaca6a682dba4a51823880e929c9308e2baabbca98f3895028f197ec0a5 Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.859992 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.860026 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4b5l\" (UniqueName: \"kubernetes.io/projected/e0cfc1e0-d3ce-466b-b5db-889efea72e81-kube-api-access-p4b5l\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.860041 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j72x7\" (UniqueName: \"kubernetes.io/projected/6c0bcd63-78a1-4f34-b025-187331eb158d-kube-api-access-j72x7\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.860054 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0cfc1e0-d3ce-466b-b5db-889efea72e81-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.860083 4813 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0bcd63-78a1-4f34-b025-187331eb158d-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.906163 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"250ea07a-903e-418f-adf4-0e720a9807f6","Type":"ContainerStarted","Data":"8107a6ecf06831ebcc5300f2e50edc8ab9421f203e433d441e787a883664fec4"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.916583 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"53e726ce-4b04-4f80-b0a6-20919949a0e6","Type":"ContainerStarted","Data":"0314cbec24bc678a4c5402bdc9bc48e0f3935e6c243d7d2ef44012bcc86ec9c4"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.922403 4813 generic.go:334] "Generic (PLEG): container finished" podID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" containerID="fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5" exitCode=0 Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.922489 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" event={"ID":"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee","Type":"ContainerDied","Data":"fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.925244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"481fa78c-0062-4dc4-b7a6-c8f5845c5480","Type":"ContainerStarted","Data":"80b4af9d50a4b20b6f0506535e46177eee87f8b7b10a1361782614771e908e1f"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.928189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c5717348-ff61-4e62-9c41-3553228842f9","Type":"ContainerStarted","Data":"667a85e9155c75848e68755500bfafbbb4aded73cfd70f2f82bba586984e311a"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.930282 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hfxg4" event={"ID":"0ce6e9c3-8bfa-4bea-8b33-497328af7573","Type":"ContainerStarted","Data":"102ebcaca6a682dba4a51823880e929c9308e2baabbca98f3895028f197ec0a5"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.931391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa","Type":"ContainerStarted","Data":"865846d0f1a546f24cb4dff38abe7970c5549ce7d36ec884be3c6b8e8141070b"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.932810 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aeb0e843-c886-42eb-844c-a544d47c8c94","Type":"ContainerStarted","Data":"4dd220f75990b613d47856437e9d4266f4b4f593f873a4fa70cbbff199d5196f"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.933886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" event={"ID":"e0cfc1e0-d3ce-466b-b5db-889efea72e81","Type":"ContainerDied","Data":"1cd99698dc7a4200c0bf87e540237d4b6d0a5563511cdf1d9edccb28ff145732"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.933947 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lmpcd" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.937001 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.937022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-v59fl" event={"ID":"6c0bcd63-78a1-4f34-b025-187331eb158d","Type":"ContainerDied","Data":"17f75c2b884eee088ba05d7ef668a4ff92ef02d37a5a5ff3ef80a985f9754b92"} Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.938504 4813 generic.go:334] "Generic (PLEG): container finished" podID="020a3922-2d98-4790-81af-81c2f00f5389" containerID="e7fa24156ec5aeba609f8760de4e9960057c234b145cd1116dec945ebf203b5d" exitCode=0 Dec 02 10:34:01 crc kubenswrapper[4813]: I1202 10:34:01.938532 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" event={"ID":"020a3922-2d98-4790-81af-81c2f00f5389","Type":"ContainerDied","Data":"e7fa24156ec5aeba609f8760de4e9960057c234b145cd1116dec945ebf203b5d"} Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.027785 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lmpcd"] Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.034291 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lmpcd"] Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.057379 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.081373 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cfc1e0-d3ce-466b-b5db-889efea72e81" path="/var/lib/kubelet/pods/e0cfc1e0-d3ce-466b-b5db-889efea72e81/volumes" Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.081705 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v59fl"] Dec 02 10:34:02 crc kubenswrapper[4813]: W1202 10:34:02.085500 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79685aab_e537_450f_aecc_0768e316bf66.slice/crio-92ce63a3bb065867fb584b47495553e8c11f4703f166d82dec7d1ee0941b4c6c WatchSource:0}: Error finding container 92ce63a3bb065867fb584b47495553e8c11f4703f166d82dec7d1ee0941b4c6c: Status 404 returned error can't find the container with id 92ce63a3bb065867fb584b47495553e8c11f4703f166d82dec7d1ee0941b4c6c Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.086014 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-v59fl"] Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.168510 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zmgkl"] Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.818840 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.950228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" event={"ID":"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee","Type":"ContainerStarted","Data":"ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb"} Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.950502 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.952095 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"79685aab-e537-450f-aecc-0768e316bf66","Type":"ContainerStarted","Data":"92ce63a3bb065867fb584b47495553e8c11f4703f166d82dec7d1ee0941b4c6c"} Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.955346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" event={"ID":"020a3922-2d98-4790-81af-81c2f00f5389","Type":"ContainerStarted","Data":"998b2164ba86e3ab852a292ff53c415deffe34d68b581c3756096bfbceaab572"} Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.955467 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.957962 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zmgkl" event={"ID":"2dbe4376-1955-47b0-9d67-0d2188ef1532","Type":"ContainerStarted","Data":"9a283521389f48de5f74193e6f813a3652b862a0f51553fc5c19795e0234ab9a"} Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.971561 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" podStartSLOduration=-9223372016.883236 podStartE2EDuration="19.971539406s" podCreationTimestamp="2025-12-02 10:33:43 +0000 UTC" firstStartedPulling="2025-12-02 10:33:44.153679916 +0000 UTC m=+1548.348854218" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:02.965779142 +0000 UTC m=+1567.160953464" watchObservedRunningTime="2025-12-02 10:34:02.971539406 +0000 UTC m=+1567.166713718" Dec 02 10:34:02 crc kubenswrapper[4813]: I1202 10:34:02.990732 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" podStartSLOduration=6.397633687 podStartE2EDuration="19.990708829s" podCreationTimestamp="2025-12-02 10:33:43 +0000 UTC" firstStartedPulling="2025-12-02 10:33:47.285658784 +0000 UTC m=+1551.480833086" lastFinishedPulling="2025-12-02 10:34:00.878733926 +0000 UTC m=+1565.073908228" observedRunningTime="2025-12-02 10:34:02.98438909 +0000 UTC m=+1567.179563412" watchObservedRunningTime="2025-12-02 10:34:02.990708829 +0000 UTC m=+1567.185883131" Dec 02 10:34:04 crc kubenswrapper[4813]: I1202 10:34:04.077273 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0bcd63-78a1-4f34-b025-187331eb158d" path="/var/lib/kubelet/pods/6c0bcd63-78a1-4f34-b025-187331eb158d/volumes" Dec 02 10:34:04 crc kubenswrapper[4813]: W1202 10:34:04.354219 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71370d19_e630_47a0_a25e_8815ab28d976.slice/crio-c76a43375b6eb7ebd654e1694a08c95f8297db4e2694c5e9100c13b2249f7b6f WatchSource:0}: Error finding container c76a43375b6eb7ebd654e1694a08c95f8297db4e2694c5e9100c13b2249f7b6f: Status 404 returned error can't find the container with id c76a43375b6eb7ebd654e1694a08c95f8297db4e2694c5e9100c13b2249f7b6f Dec 02 10:34:04 crc kubenswrapper[4813]: I1202 10:34:04.359702 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:34:04 crc kubenswrapper[4813]: I1202 10:34:04.978885 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"71370d19-e630-47a0-a25e-8815ab28d976","Type":"ContainerStarted","Data":"c76a43375b6eb7ebd654e1694a08c95f8297db4e2694c5e9100c13b2249f7b6f"} Dec 02 10:34:08 crc kubenswrapper[4813]: I1202 10:34:08.516121 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:34:08 crc kubenswrapper[4813]: I1202 10:34:08.837267 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:34:08 crc kubenswrapper[4813]: I1202 10:34:08.897357 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lz8hq"] Dec 02 10:34:09 crc kubenswrapper[4813]: I1202 10:34:09.011749 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" podUID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" containerName="dnsmasq-dns" containerID="cri-o://ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb" gracePeriod=10 Dec 02 10:34:09 crc kubenswrapper[4813]: I1202 10:34:09.068133 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:34:09 crc kubenswrapper[4813]: E1202 10:34:09.068472 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.002161 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.019975 4813 generic.go:334] "Generic (PLEG): container finished" podID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" containerID="ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb" exitCode=0 Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.020057 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.020137 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" event={"ID":"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee","Type":"ContainerDied","Data":"ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb"} Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.020173 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lz8hq" event={"ID":"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee","Type":"ContainerDied","Data":"c7794ec9e0705d474b081194abd34ab815880689cee6fc4b9f3989229d91fdaf"} Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.020195 4813 scope.go:117] "RemoveContainer" containerID="ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.024515 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"481fa78c-0062-4dc4-b7a6-c8f5845c5480","Type":"ContainerStarted","Data":"fc4c0d70acb18c676a93c4ee6b31d553c3ac93123bd2f4cce0cfad78f22f1350"} Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.025501 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.059619 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.057139831 podStartE2EDuration="23.059597838s" podCreationTimestamp="2025-12-02 10:33:47 +0000 UTC" firstStartedPulling="2025-12-02 10:34:01.635397665 +0000 UTC m=+1565.830571967" lastFinishedPulling="2025-12-02 10:34:08.637855682 +0000 UTC m=+1572.833029974" observedRunningTime="2025-12-02 10:34:10.055895243 +0000 UTC m=+1574.251069545" watchObservedRunningTime="2025-12-02 10:34:10.059597838 +0000 UTC m=+1574.254772140" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.065694 4813 scope.go:117] "RemoveContainer" containerID="fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.097505 4813 scope.go:117] "RemoveContainer" containerID="ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb" Dec 02 10:34:10 crc kubenswrapper[4813]: E1202 10:34:10.098125 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb\": container with ID starting with ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb not found: ID does not exist" containerID="ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.098189 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb"} err="failed to get container status \"ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb\": rpc error: code = NotFound desc = could not find container \"ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb\": container with ID starting with ddbaf73d712237f0aaac60cb56f82b5d879806368d7825134265bb3c878e09fb not found: ID does not exist" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.098225 4813 scope.go:117] "RemoveContainer" containerID="fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5" Dec 02 10:34:10 
crc kubenswrapper[4813]: E1202 10:34:10.098656 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5\": container with ID starting with fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5 not found: ID does not exist" containerID="fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.098682 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5"} err="failed to get container status \"fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5\": rpc error: code = NotFound desc = could not find container \"fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5\": container with ID starting with fcd80af853933b32595bd813286b98350347b16bc74122a6efd4bd32b80440a5 not found: ID does not exist" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.100301 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-dns-svc\") pod \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.100480 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-config\") pod \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.100527 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl5tw\" (UniqueName: \"kubernetes.io/projected/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-kube-api-access-rl5tw\") pod \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\" (UID: \"29c09d6b-89ce-458f-9848-f8f3e7e3c7ee\") " Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.108720 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-kube-api-access-rl5tw" (OuterVolumeSpecName: "kube-api-access-rl5tw") pod "29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" (UID: "29c09d6b-89ce-458f-9848-f8f3e7e3c7ee"). InnerVolumeSpecName "kube-api-access-rl5tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.203047 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl5tw\" (UniqueName: \"kubernetes.io/projected/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-kube-api-access-rl5tw\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.224835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" (UID: "29c09d6b-89ce-458f-9848-f8f3e7e3c7ee"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.305969 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.517474 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-config" (OuterVolumeSpecName: "config") pod "29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" (UID: "29c09d6b-89ce-458f-9848-f8f3e7e3c7ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.610483 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.805174 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lz8hq"] Dec 02 10:34:10 crc kubenswrapper[4813]: I1202 10:34:10.811231 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lz8hq"] Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.032508 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"53e726ce-4b04-4f80-b0a6-20919949a0e6","Type":"ContainerStarted","Data":"67173c36ace04a93060cb6e49d7083005ace4a985fa191b3f7753bbd17cfe1fa"} Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.034779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"71370d19-e630-47a0-a25e-8815ab28d976","Type":"ContainerStarted","Data":"b9d503dd2b121bf961b538384c80c5e4206abfa95cc1647d2489ffe2d52292b3"} Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.036021 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa","Type":"ContainerStarted","Data":"6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5"} Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.037735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aeb0e843-c886-42eb-844c-a544d47c8c94","Type":"ContainerStarted","Data":"2c5b071e407282189c6ae92d2d2471cb9a73bea0bb6cd4caff1e9189fa9e6e02"} Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.037791 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.039869 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zmgkl" event={"ID":"2dbe4376-1955-47b0-9d67-0d2188ef1532","Type":"ContainerStarted","Data":"999f86e81134c632abf1f09e4a066b79532d289ba4415c7a7c54208b4ec87bea"} Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.041579 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c5717348-ff61-4e62-9c41-3553228842f9","Type":"ContainerStarted","Data":"73f28fca83c35096b13f4202edd3be9f5a946869039d203736681d1b08eb3614"} Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.043366 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"79685aab-e537-450f-aecc-0768e316bf66","Type":"ContainerStarted","Data":"325ce160a21e1ebfac205744cbdbe0bef7048c2ed9cff34ea9c1f5fd661cdf0b"} Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.045827 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hfxg4" event={"ID":"0ce6e9c3-8bfa-4bea-8b33-497328af7573","Type":"ContainerStarted","Data":"540c1106f03b3636ff6433dcc2ca4c100537605fc25b814cad7590693c1e6233"} Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.046089 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hfxg4" Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.100561 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.340707777 podStartE2EDuration="21.100537904s" podCreationTimestamp="2025-12-02 10:33:50 +0000 UTC" firstStartedPulling="2025-12-02 10:34:01.837579634 +0000 UTC m=+1566.032753936" lastFinishedPulling="2025-12-02 10:34:09.597409761 +0000 UTC m=+1573.792584063" observedRunningTime="2025-12-02 10:34:11.093947767 +0000 UTC m=+1575.289122089" watchObservedRunningTime="2025-12-02 10:34:11.100537904 +0000 UTC m=+1575.295712206" Dec 02 10:34:11 crc kubenswrapper[4813]: I1202 10:34:11.172160 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hfxg4" podStartSLOduration=10.385964345 podStartE2EDuration="17.172143613s" podCreationTimestamp="2025-12-02 10:33:54 +0000 UTC" firstStartedPulling="2025-12-02 10:34:01.852790105 +0000 UTC m=+1566.047964407" lastFinishedPulling="2025-12-02 10:34:08.638969373 +0000 UTC m=+1572.834143675" observedRunningTime="2025-12-02 10:34:11.150625943 +0000 UTC m=+1575.345800275" watchObservedRunningTime="2025-12-02 10:34:11.172143613 +0000 UTC m=+1575.367317915" Dec 02 10:34:12 crc kubenswrapper[4813]: I1202 10:34:12.058343 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"250ea07a-903e-418f-adf4-0e720a9807f6","Type":"ContainerStarted","Data":"bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79"} Dec 02 10:34:12 crc kubenswrapper[4813]: I1202 10:34:12.062790 4813 generic.go:334] "Generic (PLEG): container finished" podID="2dbe4376-1955-47b0-9d67-0d2188ef1532" containerID="999f86e81134c632abf1f09e4a066b79532d289ba4415c7a7c54208b4ec87bea" exitCode=0 Dec 02 10:34:12 crc kubenswrapper[4813]: I1202 10:34:12.062841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zmgkl" event={"ID":"2dbe4376-1955-47b0-9d67-0d2188ef1532","Type":"ContainerDied","Data":"999f86e81134c632abf1f09e4a066b79532d289ba4415c7a7c54208b4ec87bea"} Dec 02 10:34:12 crc kubenswrapper[4813]: I1202 10:34:12.076863 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" path="/var/lib/kubelet/pods/29c09d6b-89ce-458f-9848-f8f3e7e3c7ee/volumes" Dec 02 10:34:13 crc kubenswrapper[4813]: I1202 10:34:13.078983 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zmgkl" event={"ID":"2dbe4376-1955-47b0-9d67-0d2188ef1532","Type":"ContainerStarted","Data":"11d07c6d7e7671d2c558ffa7d3b33169f8e3133cc3ce52b3f760d4967e825c77"} Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.088257 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"71370d19-e630-47a0-a25e-8815ab28d976","Type":"ContainerStarted","Data":"815363ccd9a22416dd03b83540c46edc198fadc210afc9f4a4933d574eb8c0eb"} Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.090161 4813 generic.go:334] "Generic (PLEG): container finished" podID="c5717348-ff61-4e62-9c41-3553228842f9" containerID="73f28fca83c35096b13f4202edd3be9f5a946869039d203736681d1b08eb3614" exitCode=0 Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.090228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c5717348-ff61-4e62-9c41-3553228842f9","Type":"ContainerDied","Data":"73f28fca83c35096b13f4202edd3be9f5a946869039d203736681d1b08eb3614"} Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.093034 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"79685aab-e537-450f-aecc-0768e316bf66","Type":"ContainerStarted","Data":"f265d89c732b605bd69807172d2dbeecc91e3f8d36747455bbbe2ad997dfbf0e"} Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.095813 4813 generic.go:334] "Generic (PLEG): container finished" podID="53e726ce-4b04-4f80-b0a6-20919949a0e6" containerID="67173c36ace04a93060cb6e49d7083005ace4a985fa191b3f7753bbd17cfe1fa" exitCode=0 Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.095876 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"53e726ce-4b04-4f80-b0a6-20919949a0e6","Type":"ContainerDied","Data":"67173c36ace04a93060cb6e49d7083005ace4a985fa191b3f7753bbd17cfe1fa"} Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.100570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zmgkl" event={"ID":"2dbe4376-1955-47b0-9d67-0d2188ef1532","Type":"ContainerStarted","Data":"5fa5717840aa882087c1f628edc50a3e7a043eeff84fb3bb510971b5d4b2f778"} Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.100876 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.100992 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.129501 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.8936851 podStartE2EDuration="17.12948267s" podCreationTimestamp="2025-12-02 10:33:57 +0000 UTC" firstStartedPulling="2025-12-02 10:34:04.359429572 +0000 UTC m=+1568.554603874" lastFinishedPulling="2025-12-02 10:34:13.595227142 +0000 UTC m=+1577.790401444" observedRunningTime="2025-12-02 10:34:14.123625784 +0000 UTC m=+1578.318800096" watchObservedRunningTime="2025-12-02 10:34:14.12948267 +0000 UTC m=+1578.324656972" Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.173941 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.687237689 podStartE2EDuration="20.173924539s" podCreationTimestamp="2025-12-02 10:33:54 +0000 UTC" firstStartedPulling="2025-12-02 10:34:02.094256767 +0000 UTC m=+1566.289431069" lastFinishedPulling="2025-12-02 10:34:13.580943617 +0000 UTC m=+1577.776117919" observedRunningTime="2025-12-02 10:34:14.163660249 +0000 UTC m=+1578.358834551" watchObservedRunningTime="2025-12-02 10:34:14.173924539 +0000 UTC m=+1578.369098841" Dec 02 10:34:14 crc kubenswrapper[4813]: I1202 10:34:14.189864 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zmgkl" podStartSLOduration=13.749827521 podStartE2EDuration="20.189844381s" podCreationTimestamp="2025-12-02 10:33:54 +0000 UTC" firstStartedPulling="2025-12-02 10:34:02.199674014 +0000 UTC m=+1566.394848326" lastFinishedPulling="2025-12-02 10:34:08.639690884 +0000 UTC m=+1572.834865186" observedRunningTime="2025-12-02 10:34:14.186405573 +0000 UTC m=+1578.381579905" watchObservedRunningTime="2025-12-02 10:34:14.189844381 +0000 UTC m=+1578.385018683" Dec 02 10:34:15 crc kubenswrapper[4813]: I1202 10:34:15.110030 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"53e726ce-4b04-4f80-b0a6-20919949a0e6","Type":"ContainerStarted","Data":"7c6a002fe9e3301440e71031cbd2d6efdbb5fd82a5687ae706ad30f62e273ab4"} Dec 02 10:34:15 crc kubenswrapper[4813]: I1202 10:34:15.113486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c5717348-ff61-4e62-9c41-3553228842f9","Type":"ContainerStarted","Data":"9ea22ed02aeac2a188f0154417dc9f1e3d771d9b8d73a51ecb819063147792d5"} Dec 02 10:34:15 crc kubenswrapper[4813]: I1202 10:34:15.135133 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.820331289 podStartE2EDuration="30.135105995s" podCreationTimestamp="2025-12-02 10:33:45 +0000 UTC" firstStartedPulling="2025-12-02 10:34:01.634767667 +0000 UTC m=+1565.829941969" lastFinishedPulling="2025-12-02 10:34:08.949542373 +0000 UTC m=+1573.144716675" observedRunningTime="2025-12-02 10:34:15.129394043 +0000 UTC m=+1579.324568345" watchObservedRunningTime="2025-12-02 10:34:15.135105995 +0000 UTC m=+1579.330280297" Dec 02 10:34:15 crc kubenswrapper[4813]: I1202 10:34:15.152597 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.165583242 podStartE2EDuration="29.15258054s" podCreationTimestamp="2025-12-02 10:33:46 +0000 UTC" firstStartedPulling="2025-12-02 10:34:01.651969445 +0000 UTC m=+1565.847143757" lastFinishedPulling="2025-12-02 10:34:08.638966753 +0000 UTC m=+1572.834141055" observedRunningTime="2025-12-02 10:34:15.146872699 +0000 UTC m=+1579.342047001" watchObservedRunningTime="2025-12-02 10:34:15.15258054 +0000 UTC m=+1579.347754842" Dec 02 10:34:15 crc kubenswrapper[4813]: I1202 10:34:15.464511 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 02 10:34:16 crc kubenswrapper[4813]: I1202 10:34:16.464970 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 02 10:34:16 crc kubenswrapper[4813]: I1202 10:34:16.500332 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 02 10:34:16 crc kubenswrapper[4813]: I1202 10:34:16.500427 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 02 10:34:16 crc kubenswrapper[4813]: I1202 10:34:16.513170 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 02 10:34:16 crc kubenswrapper[4813]: I1202 10:34:16.763109 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 02 10:34:16 crc kubenswrapper[4813]: I1202 10:34:16.801226 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/ovsdbserver-sb-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.127059 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.164528 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.166852 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.368412 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-bf66g"] Dec 02 10:34:17 crc kubenswrapper[4813]: E1202 10:34:17.369117 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" containerName="dnsmasq-dns" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.369216 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" containerName="dnsmasq-dns" Dec 02 10:34:17 crc kubenswrapper[4813]: E1202 10:34:17.369296 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" containerName="init" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.369397 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" containerName="init" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.369658 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c09d6b-89ce-458f-9848-f8f3e7e3c7ee" containerName="dnsmasq-dns" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.370578 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.377928 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.394064 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-bf66g"] Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.425346 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wwqk7"] Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.426368 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.429240 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.448567 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wwqk7"] Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.528776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scz59\" (UniqueName: \"kubernetes.io/projected/e08b977d-5597-4076-8ea1-21301801b3b1-kube-api-access-scz59\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.528819 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-config\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.528853 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e08b977d-5597-4076-8ea1-21301801b3b1-ovn-rundir\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.528882 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08b977d-5597-4076-8ea1-21301801b3b1-combined-ca-bundle\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.529166 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e08b977d-5597-4076-8ea1-21301801b3b1-ovs-rundir\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.529202 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.529227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08b977d-5597-4076-8ea1-21301801b3b1-config\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.529555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: 
\"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.529582 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlq9\" (UniqueName: \"kubernetes.io/projected/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-kube-api-access-vdlq9\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.529604 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08b977d-5597-4076-8ea1-21301801b3b1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.544250 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-bf66g"] Dec 02 10:34:17 crc kubenswrapper[4813]: E1202 10:34:17.544933 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-vdlq9 ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" podUID="589e27d1-011a-4b21-9dcc-b4f99cc6ea72" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.580671 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-p7898"] Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.582040 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.589166 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.600932 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.616674 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.624289 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.624345 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.624484 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tcq57" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.624532 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.633434 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08b977d-5597-4076-8ea1-21301801b3b1-combined-ca-bundle\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.633616 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e08b977d-5597-4076-8ea1-21301801b3b1-ovs-rundir\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.633738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.633804 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08b977d-5597-4076-8ea1-21301801b3b1-config\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.633834 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.633881 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdlq9\" (UniqueName: \"kubernetes.io/projected/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-kube-api-access-vdlq9\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.633927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08b977d-5597-4076-8ea1-21301801b3b1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.633961 4813 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-p7898"] Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.634141 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scz59\" (UniqueName: \"kubernetes.io/projected/e08b977d-5597-4076-8ea1-21301801b3b1-kube-api-access-scz59\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.634224 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-config\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.634309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e08b977d-5597-4076-8ea1-21301801b3b1-ovn-rundir\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.635034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e08b977d-5597-4076-8ea1-21301801b3b1-ovn-rundir\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.635772 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-config\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.644700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e08b977d-5597-4076-8ea1-21301801b3b1-ovs-rundir\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.645100 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08b977d-5597-4076-8ea1-21301801b3b1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.645363 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08b977d-5597-4076-8ea1-21301801b3b1-config\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.645682 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.651287 4813 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.653325 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.655986 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08b977d-5597-4076-8ea1-21301801b3b1-combined-ca-bundle\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.657125 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scz59\" (UniqueName: \"kubernetes.io/projected/e08b977d-5597-4076-8ea1-21301801b3b1-kube-api-access-scz59\") pod \"ovn-controller-metrics-wwqk7\" (UID: \"e08b977d-5597-4076-8ea1-21301801b3b1\") " pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.683239 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdlq9\" (UniqueName: \"kubernetes.io/projected/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-kube-api-access-vdlq9\") pod \"dnsmasq-dns-7f896c8c65-bf66g\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") " pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736005 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a02521-6143-4cfa-89c6-4b7e536990d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736063 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736126 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-config\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736322 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a02521-6143-4cfa-89c6-4b7e536990d8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " 
pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736350 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43a02521-6143-4cfa-89c6-4b7e536990d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736408 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43a02521-6143-4cfa-89c6-4b7e536990d8-config\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736449 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43a02521-6143-4cfa-89c6-4b7e536990d8-scripts\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9m8b\" (UniqueName: \"kubernetes.io/projected/608bffa9-2843-4086-a271-a1ffb445b00a-kube-api-access-j9m8b\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppltm\" (UniqueName: \"kubernetes.io/projected/43a02521-6143-4cfa-89c6-4b7e536990d8-kube-api-access-ppltm\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.736805 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a02521-6143-4cfa-89c6-4b7e536990d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.746494 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.746787 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.758396 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wwqk7" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.837989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a02521-6143-4cfa-89c6-4b7e536990d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838109 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-config\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a02521-6143-4cfa-89c6-4b7e536990d8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838196 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43a02521-6143-4cfa-89c6-4b7e536990d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43a02521-6143-4cfa-89c6-4b7e536990d8-config\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838267 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43a02521-6143-4cfa-89c6-4b7e536990d8-scripts\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838315 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9m8b\" (UniqueName: 
\"kubernetes.io/projected/608bffa9-2843-4086-a271-a1ffb445b00a-kube-api-access-j9m8b\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppltm\" (UniqueName: \"kubernetes.io/projected/43a02521-6143-4cfa-89c6-4b7e536990d8-kube-api-access-ppltm\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.838396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a02521-6143-4cfa-89c6-4b7e536990d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.840413 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.840564 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43a02521-6143-4cfa-89c6-4b7e536990d8-scripts\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.840804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-config\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.840998 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43a02521-6143-4cfa-89c6-4b7e536990d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.841065 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43a02521-6143-4cfa-89c6-4b7e536990d8-config\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.841695 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.843301 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.844854 4813 
Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.845839 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a02521-6143-4cfa-89c6-4b7e536990d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0"
Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.846626 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a02521-6143-4cfa-89c6-4b7e536990d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0"
Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.861248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9m8b\" (UniqueName: \"kubernetes.io/projected/608bffa9-2843-4086-a271-a1ffb445b00a-kube-api-access-j9m8b\") pod \"dnsmasq-dns-86db49b7ff-p7898\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " pod="openstack/dnsmasq-dns-86db49b7ff-p7898"
Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.862261 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppltm\" (UniqueName: \"kubernetes.io/projected/43a02521-6143-4cfa-89c6-4b7e536990d8-kube-api-access-ppltm\") pod \"ovn-northd-0\" (UID: \"43a02521-6143-4cfa-89c6-4b7e536990d8\") " pod="openstack/ovn-northd-0"
Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.901557 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-p7898"
Dec 02 10:34:17 crc kubenswrapper[4813]: I1202 10:34:17.942153 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.125214 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.132904 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-bf66g"
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.154175 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-bf66g"
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.232616 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-p7898"]
Dec 02 10:34:18 crc kubenswrapper[4813]: W1202 10:34:18.235368 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod608bffa9_2843_4086_a271_a1ffb445b00a.slice/crio-c460490e73d1eb8ee8817939df16d6152e3cf0cf7d2b161165c90ede6a28ee9f WatchSource:0}: Error finding container c460490e73d1eb8ee8817939df16d6152e3cf0cf7d2b161165c90ede6a28ee9f: Status 404 returned error can't find the container with id c460490e73d1eb8ee8817939df16d6152e3cf0cf7d2b161165c90ede6a28ee9f
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.240507 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wwqk7"]
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.248233 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-config\") pod \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") "
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.248283 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-ovsdbserver-sb\") pod \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") "
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.248384 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdlq9\" (UniqueName: \"kubernetes.io/projected/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-kube-api-access-vdlq9\") pod \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") "
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.248480 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-dns-svc\") pod \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\" (UID: \"589e27d1-011a-4b21-9dcc-b4f99cc6ea72\") "
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.249010 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "589e27d1-011a-4b21-9dcc-b4f99cc6ea72" (UID: "589e27d1-011a-4b21-9dcc-b4f99cc6ea72"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.249197 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "589e27d1-011a-4b21-9dcc-b4f99cc6ea72" (UID: "589e27d1-011a-4b21-9dcc-b4f99cc6ea72"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.249350 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.249374 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.249057 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-config" (OuterVolumeSpecName: "config") pod "589e27d1-011a-4b21-9dcc-b4f99cc6ea72" (UID: "589e27d1-011a-4b21-9dcc-b4f99cc6ea72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.251727 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-kube-api-access-vdlq9" (OuterVolumeSpecName: "kube-api-access-vdlq9") pod "589e27d1-011a-4b21-9dcc-b4f99cc6ea72" (UID: "589e27d1-011a-4b21-9dcc-b4f99cc6ea72"). InnerVolumeSpecName "kube-api-access-vdlq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.351261 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdlq9\" (UniqueName: \"kubernetes.io/projected/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-kube-api-access-vdlq9\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.351293 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/589e27d1-011a-4b21-9dcc-b4f99cc6ea72-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.487692 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 02 10:34:18 crc kubenswrapper[4813]: W1202 10:34:18.520043 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a02521_6143_4cfa_89c6_4b7e536990d8.slice/crio-e9254f20655f4e1b30467e2114c1a4e305959c281cf58fa55b3ec52120601fbb WatchSource:0}: Error finding container e9254f20655f4e1b30467e2114c1a4e305959c281cf58fa55b3ec52120601fbb: Status 404 returned error can't find the container with id e9254f20655f4e1b30467e2114c1a4e305959c281cf58fa55b3ec52120601fbb Dec 02 10:34:18 crc kubenswrapper[4813]: I1202 10:34:18.990171 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.062416 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.142804 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"43a02521-6143-4cfa-89c6-4b7e536990d8","Type":"ContainerStarted","Data":"e9254f20655f4e1b30467e2114c1a4e305959c281cf58fa55b3ec52120601fbb"} Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.144718 4813 generic.go:334] "Generic (PLEG): container finished" podID="608bffa9-2843-4086-a271-a1ffb445b00a" 
containerID="6ed282d37298592386b882dfb357d7f8cb68e118e33252c0e033d13c06667601" exitCode=0 Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.144779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" event={"ID":"608bffa9-2843-4086-a271-a1ffb445b00a","Type":"ContainerDied","Data":"6ed282d37298592386b882dfb357d7f8cb68e118e33252c0e033d13c06667601"} Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.144803 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" event={"ID":"608bffa9-2843-4086-a271-a1ffb445b00a","Type":"ContainerStarted","Data":"c460490e73d1eb8ee8817939df16d6152e3cf0cf7d2b161165c90ede6a28ee9f"} Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.149111 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-bf66g" Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.149111 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wwqk7" event={"ID":"e08b977d-5597-4076-8ea1-21301801b3b1","Type":"ContainerStarted","Data":"1d7645273bba726765a590ffec8d26580df0fb41a350dc12b8098ec3d8470e7e"} Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.149186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wwqk7" event={"ID":"e08b977d-5597-4076-8ea1-21301801b3b1","Type":"ContainerStarted","Data":"6f411101f09487dd2a1a0010b35363f9defe0c3672fda1fe5a5720f95a63d474"} Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.198888 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wwqk7" podStartSLOduration=2.198866373 podStartE2EDuration="2.198866373s" podCreationTimestamp="2025-12-02 10:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:19.186798611 +0000 UTC m=+1583.381972913" watchObservedRunningTime="2025-12-02 10:34:19.198866373 +0000 UTC m=+1583.394040675" Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.294237 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-bf66g"] Dec 02 10:34:19 crc kubenswrapper[4813]: I1202 10:34:19.299893 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-bf66g"] Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.080439 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="589e27d1-011a-4b21-9dcc-b4f99cc6ea72" path="/var/lib/kubelet/pods/589e27d1-011a-4b21-9dcc-b4f99cc6ea72/volumes" Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.159019 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"43a02521-6143-4cfa-89c6-4b7e536990d8","Type":"ContainerStarted","Data":"afbeac98634ac457ef0b42afa906ddc43c56528f96cb30dc85c07b72ce2ccc2b"} Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.159119 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.159134 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"43a02521-6143-4cfa-89c6-4b7e536990d8","Type":"ContainerStarted","Data":"544476c78d665bcf5ea02a53aefd52b0cddf7197e918a58c6ec024094b000806"} Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.162455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" event={"ID":"608bffa9-2843-4086-a271-a1ffb445b00a","Type":"ContainerStarted","Data":"970a8636ab28dd4f4e87d458526aae325fd2b6140fc5dd85ec5b430dee52b282"} Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.162634 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.177704 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.9719386349999999 podStartE2EDuration="3.177687279s" podCreationTimestamp="2025-12-02 10:34:17 +0000 UTC" firstStartedPulling="2025-12-02 10:34:18.522386216 +0000 UTC m=+1582.717560518" lastFinishedPulling="2025-12-02 10:34:19.72813486 +0000 UTC m=+1583.923309162" observedRunningTime="2025-12-02 10:34:20.174474798 +0000 UTC m=+1584.369649120" watchObservedRunningTime="2025-12-02 10:34:20.177687279 +0000 UTC m=+1584.372861581" Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.199906 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" podStartSLOduration=3.199887078 podStartE2EDuration="3.199887078s" podCreationTimestamp="2025-12-02 10:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:20.193469486 +0000 UTC m=+1584.388643808" watchObservedRunningTime="2025-12-02 10:34:20.199887078 +0000 UTC m=+1584.395061380" Dec 02 10:34:20 crc kubenswrapper[4813]: I1202 10:34:20.857245 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 02 10:34:21 crc kubenswrapper[4813]: I1202 10:34:21.825277 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 02 10:34:21 crc kubenswrapper[4813]: I1202 10:34:21.912649 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 02 10:34:23 crc kubenswrapper[4813]: I1202 10:34:23.068177 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:34:23 crc kubenswrapper[4813]: E1202 10:34:23.068418 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.739238 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-49bf-account-create-update-vd578"] Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.741006 4813 util.go:30] "No sandbox for pod can be found. 
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.745387 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.754158 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-49bf-account-create-update-vd578"]
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.802825 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-l5fzt"]
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.803794 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l5fzt"
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.807262 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8x7f\" (UniqueName: \"kubernetes.io/projected/ac720eab-315f-4740-adb3-329feec1e5ef-kube-api-access-t8x7f\") pod \"keystone-49bf-account-create-update-vd578\" (UID: \"ac720eab-315f-4740-adb3-329feec1e5ef\") " pod="openstack/keystone-49bf-account-create-update-vd578"
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.807529 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac720eab-315f-4740-adb3-329feec1e5ef-operator-scripts\") pod \"keystone-49bf-account-create-update-vd578\" (UID: \"ac720eab-315f-4740-adb3-329feec1e5ef\") " pod="openstack/keystone-49bf-account-create-update-vd578"
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.821263 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l5fzt"]
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.903253 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-p7898"
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.908593 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wth\" (UniqueName: \"kubernetes.io/projected/41332d45-348e-4649-9343-110363ba5ee0-kube-api-access-f9wth\") pod \"keystone-db-create-l5fzt\" (UID: \"41332d45-348e-4649-9343-110363ba5ee0\") " pod="openstack/keystone-db-create-l5fzt"
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.908706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41332d45-348e-4649-9343-110363ba5ee0-operator-scripts\") pod \"keystone-db-create-l5fzt\" (UID: \"41332d45-348e-4649-9343-110363ba5ee0\") " pod="openstack/keystone-db-create-l5fzt"
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.908744 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac720eab-315f-4740-adb3-329feec1e5ef-operator-scripts\") pod \"keystone-49bf-account-create-update-vd578\" (UID: \"ac720eab-315f-4740-adb3-329feec1e5ef\") " pod="openstack/keystone-49bf-account-create-update-vd578"
Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.908799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8x7f\" (UniqueName: \"kubernetes.io/projected/ac720eab-315f-4740-adb3-329feec1e5ef-kube-api-access-t8x7f\") pod \"keystone-49bf-account-create-update-vd578\" (UID: \"ac720eab-315f-4740-adb3-329feec1e5ef\") " pod="openstack/keystone-49bf-account-create-update-vd578"
\"ac720eab-315f-4740-adb3-329feec1e5ef\") " pod="openstack/keystone-49bf-account-create-update-vd578" Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.910509 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac720eab-315f-4740-adb3-329feec1e5ef-operator-scripts\") pod \"keystone-49bf-account-create-update-vd578\" (UID: \"ac720eab-315f-4740-adb3-329feec1e5ef\") " pod="openstack/keystone-49bf-account-create-update-vd578" Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.936269 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8x7f\" (UniqueName: \"kubernetes.io/projected/ac720eab-315f-4740-adb3-329feec1e5ef-kube-api-access-t8x7f\") pod \"keystone-49bf-account-create-update-vd578\" (UID: \"ac720eab-315f-4740-adb3-329feec1e5ef\") " pod="openstack/keystone-49bf-account-create-update-vd578" Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.965959 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wknc6"] Dec 02 10:34:27 crc kubenswrapper[4813]: I1202 10:34:27.966272 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" podUID="020a3922-2d98-4790-81af-81c2f00f5389" containerName="dnsmasq-dns" containerID="cri-o://998b2164ba86e3ab852a292ff53c415deffe34d68b581c3756096bfbceaab572" gracePeriod=10 Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.011992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wth\" (UniqueName: \"kubernetes.io/projected/41332d45-348e-4649-9343-110363ba5ee0-kube-api-access-f9wth\") pod \"keystone-db-create-l5fzt\" (UID: \"41332d45-348e-4649-9343-110363ba5ee0\") " pod="openstack/keystone-db-create-l5fzt" Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.012420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41332d45-348e-4649-9343-110363ba5ee0-operator-scripts\") pod \"keystone-db-create-l5fzt\" (UID: \"41332d45-348e-4649-9343-110363ba5ee0\") " pod="openstack/keystone-db-create-l5fzt" Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.013712 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41332d45-348e-4649-9343-110363ba5ee0-operator-scripts\") pod \"keystone-db-create-l5fzt\" (UID: \"41332d45-348e-4649-9343-110363ba5ee0\") " pod="openstack/keystone-db-create-l5fzt" Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.018100 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-k64ck"] Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.019553 4813 util.go:30] "No sandbox for pod can be found. 
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.037519 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k64ck"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.041086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9wth\" (UniqueName: \"kubernetes.io/projected/41332d45-348e-4649-9343-110363ba5ee0-kube-api-access-f9wth\") pod \"keystone-db-create-l5fzt\" (UID: \"41332d45-348e-4649-9343-110363ba5ee0\") " pod="openstack/keystone-db-create-l5fzt"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.064466 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-49bf-account-create-update-vd578"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.114044 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-operator-scripts\") pod \"placement-db-create-k64ck\" (UID: \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\") " pod="openstack/placement-db-create-k64ck"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.114137 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22hlc\" (UniqueName: \"kubernetes.io/projected/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-kube-api-access-22hlc\") pod \"placement-db-create-k64ck\" (UID: \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\") " pod="openstack/placement-db-create-k64ck"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.120433 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l5fzt"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.121435 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-445c-account-create-update-8rjvh"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.123421 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.125791 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.132222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-445c-account-create-update-8rjvh"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.220146 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-operator-scripts\") pod \"placement-db-create-k64ck\" (UID: \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\") " pod="openstack/placement-db-create-k64ck"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.220689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfr9h\" (UniqueName: \"kubernetes.io/projected/b8a79b83-9538-4382-acb8-b44688f3e2ce-kube-api-access-jfr9h\") pod \"placement-445c-account-create-update-8rjvh\" (UID: \"b8a79b83-9538-4382-acb8-b44688f3e2ce\") " pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.221163 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22hlc\" (UniqueName: \"kubernetes.io/projected/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-kube-api-access-22hlc\") pod \"placement-db-create-k64ck\" (UID: \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\") " pod="openstack/placement-db-create-k64ck"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.221399 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a79b83-9538-4382-acb8-b44688f3e2ce-operator-scripts\") pod \"placement-445c-account-create-update-8rjvh\" (UID: \"b8a79b83-9538-4382-acb8-b44688f3e2ce\") " pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.222621 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-operator-scripts\") pod \"placement-db-create-k64ck\" (UID: \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\") " pod="openstack/placement-db-create-k64ck"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.249409 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22hlc\" (UniqueName: \"kubernetes.io/projected/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-kube-api-access-22hlc\") pod \"placement-db-create-k64ck\" (UID: \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\") " pod="openstack/placement-db-create-k64ck"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.282186 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-q8lrk"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.283455 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.300534 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-q8lrk"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.322955 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfr9h\" (UniqueName: \"kubernetes.io/projected/b8a79b83-9538-4382-acb8-b44688f3e2ce-kube-api-access-jfr9h\") pod \"placement-445c-account-create-update-8rjvh\" (UID: \"b8a79b83-9538-4382-acb8-b44688f3e2ce\") " pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.323056 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392875ff-4cd5-400d-b6e3-4c07d4b332ec-operator-scripts\") pod \"glance-db-create-q8lrk\" (UID: \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\") " pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.323122 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx274\" (UniqueName: \"kubernetes.io/projected/392875ff-4cd5-400d-b6e3-4c07d4b332ec-kube-api-access-cx274\") pod \"glance-db-create-q8lrk\" (UID: \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\") " pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.323161 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a79b83-9538-4382-acb8-b44688f3e2ce-operator-scripts\") pod \"placement-445c-account-create-update-8rjvh\" (UID: \"b8a79b83-9538-4382-acb8-b44688f3e2ce\") " pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.324177 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a79b83-9538-4382-acb8-b44688f3e2ce-operator-scripts\") pod \"placement-445c-account-create-update-8rjvh\" (UID: \"b8a79b83-9538-4382-acb8-b44688f3e2ce\") " pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.344241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfr9h\" (UniqueName: \"kubernetes.io/projected/b8a79b83-9538-4382-acb8-b44688f3e2ce-kube-api-access-jfr9h\") pod \"placement-445c-account-create-update-8rjvh\" (UID: \"b8a79b83-9538-4382-acb8-b44688f3e2ce\") " pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.368383 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9de0-account-create-update-w7fll"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.369721 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.374413 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.376778 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9de0-account-create-update-w7fll"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.386288 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k64ck"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.425149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-operator-scripts\") pod \"glance-9de0-account-create-update-w7fll\" (UID: \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\") " pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.425301 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392875ff-4cd5-400d-b6e3-4c07d4b332ec-operator-scripts\") pod \"glance-db-create-q8lrk\" (UID: \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\") " pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.425542 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v9wx\" (UniqueName: \"kubernetes.io/projected/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-kube-api-access-2v9wx\") pod \"glance-9de0-account-create-update-w7fll\" (UID: \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\") " pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.425731 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx274\" (UniqueName: \"kubernetes.io/projected/392875ff-4cd5-400d-b6e3-4c07d4b332ec-kube-api-access-cx274\") pod \"glance-db-create-q8lrk\" (UID: \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\") " pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.426099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392875ff-4cd5-400d-b6e3-4c07d4b332ec-operator-scripts\") pod \"glance-db-create-q8lrk\" (UID: \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\") " pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.445436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx274\" (UniqueName: \"kubernetes.io/projected/392875ff-4cd5-400d-b6e3-4c07d4b332ec-kube-api-access-cx274\") pod \"glance-db-create-q8lrk\" (UID: \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\") " pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.525787 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.527225 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-operator-scripts\") pod \"glance-9de0-account-create-update-w7fll\" (UID: \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\") " pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.527340 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v9wx\" (UniqueName: \"kubernetes.io/projected/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-kube-api-access-2v9wx\") pod \"glance-9de0-account-create-update-w7fll\" (UID: \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\") " pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.529779 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-operator-scripts\") pod \"glance-9de0-account-create-update-w7fll\" (UID: \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\") " pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.546153 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v9wx\" (UniqueName: \"kubernetes.io/projected/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-kube-api-access-2v9wx\") pod \"glance-9de0-account-create-update-w7fll\" (UID: \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\") " pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.610559 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:28 crc kubenswrapper[4813]: W1202 10:34:28.616805 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac720eab_315f_4740_adb3_329feec1e5ef.slice/crio-ba0955867dd95bd0e363996980dd9d6796d673477ed67feb48483260aff323d0 WatchSource:0}: Error finding container ba0955867dd95bd0e363996980dd9d6796d673477ed67feb48483260aff323d0: Status 404 returned error can't find the container with id ba0955867dd95bd0e363996980dd9d6796d673477ed67feb48483260aff323d0
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.618445 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-49bf-account-create-update-vd578"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.677094 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-l5fzt"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.686825 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.802437 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-445c-account-create-update-8rjvh"]
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.838891 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" podUID="020a3922-2d98-4790-81af-81c2f00f5389" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.101:5353: connect: connection refused"
Dec 02 10:34:28 crc kubenswrapper[4813]: I1202 10:34:28.879819 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k64ck"]
Dec 02 10:34:29 crc kubenswrapper[4813]: I1202 10:34:29.031395 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9de0-account-create-update-w7fll"]
Dec 02 10:34:29 crc kubenswrapper[4813]: W1202 10:34:29.038202 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e5d7f6a_a5a9_4bc8_a12d_0be10887c252.slice/crio-481c2b6800eec7c1e8caa72cfe8499ecb03fd8650b7f114d51984c7133a02aa3 WatchSource:0}: Error finding container 481c2b6800eec7c1e8caa72cfe8499ecb03fd8650b7f114d51984c7133a02aa3: Status 404 returned error can't find the container with id 481c2b6800eec7c1e8caa72cfe8499ecb03fd8650b7f114d51984c7133a02aa3
Dec 02 10:34:29 crc kubenswrapper[4813]: I1202 10:34:29.130530 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-q8lrk"]
Dec 02 10:34:29 crc kubenswrapper[4813]: W1202 10:34:29.142683 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392875ff_4cd5_400d_b6e3_4c07d4b332ec.slice/crio-ba77b5c2f4b27790819e4aa3b40c68f6ff83c9e1a3ac3320c77c47b147370263 WatchSource:0}: Error finding container ba77b5c2f4b27790819e4aa3b40c68f6ff83c9e1a3ac3320c77c47b147370263: Status 404 returned error can't find the container with id ba77b5c2f4b27790819e4aa3b40c68f6ff83c9e1a3ac3320c77c47b147370263
Dec 02 10:34:29 crc kubenswrapper[4813]: I1202 10:34:29.233045 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-445c-account-create-update-8rjvh" event={"ID":"b8a79b83-9538-4382-acb8-b44688f3e2ce","Type":"ContainerStarted","Data":"e641604dc23da51a34865e9dfe154246a46e225a6dac49e25804a4399b18d185"}
Dec 02 10:34:29 crc kubenswrapper[4813]: I1202 10:34:29.234160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9de0-account-create-update-w7fll" event={"ID":"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252","Type":"ContainerStarted","Data":"481c2b6800eec7c1e8caa72cfe8499ecb03fd8650b7f114d51984c7133a02aa3"}
Dec 02 10:34:29 crc kubenswrapper[4813]: I1202 10:34:29.238348 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l5fzt" event={"ID":"41332d45-348e-4649-9343-110363ba5ee0","Type":"ContainerStarted","Data":"bf61f19808aeeeae9df185dde80d75e95a8b6e50e3567ece9a528b42d5ee2187"}
Dec 02 10:34:29 crc kubenswrapper[4813]: I1202 10:34:29.239573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k64ck" event={"ID":"9f1a6c3c-c5fd-402d-9fda-b497be370d4c","Type":"ContainerStarted","Data":"bd58b48eaff05c6b77b837560547f95ac38c1e7f1d1c730ce0fb3b11377e3ff1"}
Dec 02 10:34:29 crc kubenswrapper[4813]: I1202 10:34:29.240606 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-49bf-account-create-update-vd578" event={"ID":"ac720eab-315f-4740-adb3-329feec1e5ef","Type":"ContainerStarted","Data":"ba0955867dd95bd0e363996980dd9d6796d673477ed67feb48483260aff323d0"}
pod" pod="openstack/keystone-49bf-account-create-update-vd578" event={"ID":"ac720eab-315f-4740-adb3-329feec1e5ef","Type":"ContainerStarted","Data":"ba0955867dd95bd0e363996980dd9d6796d673477ed67feb48483260aff323d0"} Dec 02 10:34:29 crc kubenswrapper[4813]: I1202 10:34:29.241443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-q8lrk" event={"ID":"392875ff-4cd5-400d-b6e3-4c07d4b332ec","Type":"ContainerStarted","Data":"ba77b5c2f4b27790819e4aa3b40c68f6ff83c9e1a3ac3320c77c47b147370263"} Dec 02 10:34:30 crc kubenswrapper[4813]: I1202 10:34:30.250638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-49bf-account-create-update-vd578" event={"ID":"ac720eab-315f-4740-adb3-329feec1e5ef","Type":"ContainerStarted","Data":"b19f88f9ad986048b32b44369beaa8a98ce923f5327f11c3d7eba5cdb68e94ef"} Dec 02 10:34:30 crc kubenswrapper[4813]: I1202 10:34:30.255272 4813 generic.go:334] "Generic (PLEG): container finished" podID="020a3922-2d98-4790-81af-81c2f00f5389" containerID="998b2164ba86e3ab852a292ff53c415deffe34d68b581c3756096bfbceaab572" exitCode=0 Dec 02 10:34:30 crc kubenswrapper[4813]: I1202 10:34:30.255318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" event={"ID":"020a3922-2d98-4790-81af-81c2f00f5389","Type":"ContainerDied","Data":"998b2164ba86e3ab852a292ff53c415deffe34d68b581c3756096bfbceaab572"} Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.125641 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.180168 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-config\") pod \"020a3922-2d98-4790-81af-81c2f00f5389\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.180268 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8gz6\" (UniqueName: \"kubernetes.io/projected/020a3922-2d98-4790-81af-81c2f00f5389-kube-api-access-x8gz6\") pod \"020a3922-2d98-4790-81af-81c2f00f5389\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.180445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-dns-svc\") pod \"020a3922-2d98-4790-81af-81c2f00f5389\" (UID: \"020a3922-2d98-4790-81af-81c2f00f5389\") " Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.185476 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/020a3922-2d98-4790-81af-81c2f00f5389-kube-api-access-x8gz6" (OuterVolumeSpecName: "kube-api-access-x8gz6") pod "020a3922-2d98-4790-81af-81c2f00f5389" (UID: "020a3922-2d98-4790-81af-81c2f00f5389"). InnerVolumeSpecName "kube-api-access-x8gz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.217482 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "020a3922-2d98-4790-81af-81c2f00f5389" (UID: "020a3922-2d98-4790-81af-81c2f00f5389"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.224370 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-config" (OuterVolumeSpecName: "config") pod "020a3922-2d98-4790-81af-81c2f00f5389" (UID: "020a3922-2d98-4790-81af-81c2f00f5389"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.265380 4813 generic.go:334] "Generic (PLEG): container finished" podID="392875ff-4cd5-400d-b6e3-4c07d4b332ec" containerID="b8910dd4bb237b8de01a04bfe71d6c05c4ac6ce6590291541f5aa8da83858747" exitCode=0 Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.265477 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-q8lrk" event={"ID":"392875ff-4cd5-400d-b6e3-4c07d4b332ec","Type":"ContainerDied","Data":"b8910dd4bb237b8de01a04bfe71d6c05c4ac6ce6590291541f5aa8da83858747"} Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.270102 4813 generic.go:334] "Generic (PLEG): container finished" podID="ac720eab-315f-4740-adb3-329feec1e5ef" containerID="b19f88f9ad986048b32b44369beaa8a98ce923f5327f11c3d7eba5cdb68e94ef" exitCode=0 Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.270234 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-49bf-account-create-update-vd578" event={"ID":"ac720eab-315f-4740-adb3-329feec1e5ef","Type":"ContainerDied","Data":"b19f88f9ad986048b32b44369beaa8a98ce923f5327f11c3d7eba5cdb68e94ef"} Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.271563 4813 generic.go:334] "Generic (PLEG): container finished" podID="b8a79b83-9538-4382-acb8-b44688f3e2ce" containerID="501561c880e33e41c85923830c2bdc838ecebad31ecf8419f8a4acd3639e0719" exitCode=0 Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.271623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-445c-account-create-update-8rjvh" event={"ID":"b8a79b83-9538-4382-acb8-b44688f3e2ce","Type":"ContainerDied","Data":"501561c880e33e41c85923830c2bdc838ecebad31ecf8419f8a4acd3639e0719"} Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.272887 4813 generic.go:334] "Generic (PLEG): container finished" podID="2e5d7f6a-a5a9-4bc8-a12d-0be10887c252" containerID="203a43e61a3787fb1b06fd4f4c559f16cb75de12a716543a2a02efe839a8aa8e" exitCode=0 Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.272976 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9de0-account-create-update-w7fll" event={"ID":"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252","Type":"ContainerDied","Data":"203a43e61a3787fb1b06fd4f4c559f16cb75de12a716543a2a02efe839a8aa8e"} Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.274470 4813 generic.go:334] "Generic (PLEG): container finished" podID="41332d45-348e-4649-9343-110363ba5ee0" containerID="fd46bb4f91584876c71c901bd4665392b6c2066e1959637c7d41c7de3588b228" exitCode=0 Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.274623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l5fzt" event={"ID":"41332d45-348e-4649-9343-110363ba5ee0","Type":"ContainerDied","Data":"fd46bb4f91584876c71c901bd4665392b6c2066e1959637c7d41c7de3588b228"} Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.279174 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" 
event={"ID":"020a3922-2d98-4790-81af-81c2f00f5389","Type":"ContainerDied","Data":"bcf082fda2d8cbcc6b12dd444aa1c6e4ef6551bfceafbe78c5010c00d478b368"} Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.279215 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wknc6" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.279222 4813 scope.go:117] "RemoveContainer" containerID="998b2164ba86e3ab852a292ff53c415deffe34d68b581c3756096bfbceaab572" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.280396 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f1a6c3c-c5fd-402d-9fda-b497be370d4c" containerID="07c206cae49861eb5882c29952aaa0898704e1f8cb40b7d499d16de897b8c911" exitCode=0 Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.280524 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k64ck" event={"ID":"9f1a6c3c-c5fd-402d-9fda-b497be370d4c","Type":"ContainerDied","Data":"07c206cae49861eb5882c29952aaa0898704e1f8cb40b7d499d16de897b8c911"} Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.282537 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.282650 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/020a3922-2d98-4790-81af-81c2f00f5389-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.282737 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8gz6\" (UniqueName: \"kubernetes.io/projected/020a3922-2d98-4790-81af-81c2f00f5389-kube-api-access-x8gz6\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.300348 4813 scope.go:117] "RemoveContainer" containerID="e7fa24156ec5aeba609f8760de4e9960057c234b145cd1116dec945ebf203b5d" Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.361510 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wknc6"] Dec 02 10:34:31 crc kubenswrapper[4813]: I1202 10:34:31.372103 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wknc6"] Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.077454 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="020a3922-2d98-4790-81af-81c2f00f5389" path="/var/lib/kubelet/pods/020a3922-2d98-4790-81af-81c2f00f5389/volumes" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.648157 4813 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.705475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41332d45-348e-4649-9343-110363ba5ee0-operator-scripts\") pod \"41332d45-348e-4649-9343-110363ba5ee0\" (UID: \"41332d45-348e-4649-9343-110363ba5ee0\") "
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.705567 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9wth\" (UniqueName: \"kubernetes.io/projected/41332d45-348e-4649-9343-110363ba5ee0-kube-api-access-f9wth\") pod \"41332d45-348e-4649-9343-110363ba5ee0\" (UID: \"41332d45-348e-4649-9343-110363ba5ee0\") "
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.706708 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41332d45-348e-4649-9343-110363ba5ee0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41332d45-348e-4649-9343-110363ba5ee0" (UID: "41332d45-348e-4649-9343-110363ba5ee0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.711918 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41332d45-348e-4649-9343-110363ba5ee0-kube-api-access-f9wth" (OuterVolumeSpecName: "kube-api-access-f9wth") pod "41332d45-348e-4649-9343-110363ba5ee0" (UID: "41332d45-348e-4649-9343-110363ba5ee0"). InnerVolumeSpecName "kube-api-access-f9wth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.807417 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41332d45-348e-4649-9343-110363ba5ee0-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.807454 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9wth\" (UniqueName: \"kubernetes.io/projected/41332d45-348e-4649-9343-110363ba5ee0-kube-api-access-f9wth\") on node \"crc\" DevicePath \"\""
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.834058 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9de0-account-create-update-w7fll"
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.842737 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-445c-account-create-update-8rjvh"
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.854136 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k64ck"
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.866585 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-q8lrk"
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.881020 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-49bf-account-create-update-vd578"
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.908661 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-operator-scripts\") pod \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\" (UID: \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\") "
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.908726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac720eab-315f-4740-adb3-329feec1e5ef-operator-scripts\") pod \"ac720eab-315f-4740-adb3-329feec1e5ef\" (UID: \"ac720eab-315f-4740-adb3-329feec1e5ef\") "
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.908774 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx274\" (UniqueName: \"kubernetes.io/projected/392875ff-4cd5-400d-b6e3-4c07d4b332ec-kube-api-access-cx274\") pod \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\" (UID: \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\") "
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.908817 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22hlc\" (UniqueName: \"kubernetes.io/projected/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-kube-api-access-22hlc\") pod \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\" (UID: \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\") "
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.908876 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v9wx\" (UniqueName: \"kubernetes.io/projected/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-kube-api-access-2v9wx\") pod \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\" (UID: \"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252\") "
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.908918 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfr9h\" (UniqueName: \"kubernetes.io/projected/b8a79b83-9538-4382-acb8-b44688f3e2ce-kube-api-access-jfr9h\") pod \"b8a79b83-9538-4382-acb8-b44688f3e2ce\" (UID: \"b8a79b83-9538-4382-acb8-b44688f3e2ce\") "
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.909229 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e5d7f6a-a5a9-4bc8-a12d-0be10887c252" (UID: "2e5d7f6a-a5a9-4bc8-a12d-0be10887c252"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.909247 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac720eab-315f-4740-adb3-329feec1e5ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac720eab-315f-4740-adb3-329feec1e5ef" (UID: "ac720eab-315f-4740-adb3-329feec1e5ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.909483 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8x7f\" (UniqueName: \"kubernetes.io/projected/ac720eab-315f-4740-adb3-329feec1e5ef-kube-api-access-t8x7f\") pod \"ac720eab-315f-4740-adb3-329feec1e5ef\" (UID: \"ac720eab-315f-4740-adb3-329feec1e5ef\") " Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.909621 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a79b83-9538-4382-acb8-b44688f3e2ce-operator-scripts\") pod \"b8a79b83-9538-4382-acb8-b44688f3e2ce\" (UID: \"b8a79b83-9538-4382-acb8-b44688f3e2ce\") " Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.909656 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-operator-scripts\") pod \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\" (UID: \"9f1a6c3c-c5fd-402d-9fda-b497be370d4c\") " Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.909691 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392875ff-4cd5-400d-b6e3-4c07d4b332ec-operator-scripts\") pod \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\" (UID: \"392875ff-4cd5-400d-b6e3-4c07d4b332ec\") " Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.910114 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.910140 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac720eab-315f-4740-adb3-329feec1e5ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.910995 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a79b83-9538-4382-acb8-b44688f3e2ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8a79b83-9538-4382-acb8-b44688f3e2ce" (UID: "b8a79b83-9538-4382-acb8-b44688f3e2ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.914144 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a79b83-9538-4382-acb8-b44688f3e2ce-kube-api-access-jfr9h" (OuterVolumeSpecName: "kube-api-access-jfr9h") pod "b8a79b83-9538-4382-acb8-b44688f3e2ce" (UID: "b8a79b83-9538-4382-acb8-b44688f3e2ce"). InnerVolumeSpecName "kube-api-access-jfr9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.915037 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-kube-api-access-2v9wx" (OuterVolumeSpecName: "kube-api-access-2v9wx") pod "2e5d7f6a-a5a9-4bc8-a12d-0be10887c252" (UID: "2e5d7f6a-a5a9-4bc8-a12d-0be10887c252"). InnerVolumeSpecName "kube-api-access-2v9wx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.915183 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392875ff-4cd5-400d-b6e3-4c07d4b332ec-kube-api-access-cx274" (OuterVolumeSpecName: "kube-api-access-cx274") pod "392875ff-4cd5-400d-b6e3-4c07d4b332ec" (UID: "392875ff-4cd5-400d-b6e3-4c07d4b332ec"). InnerVolumeSpecName "kube-api-access-cx274". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.916584 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-kube-api-access-22hlc" (OuterVolumeSpecName: "kube-api-access-22hlc") pod "9f1a6c3c-c5fd-402d-9fda-b497be370d4c" (UID: "9f1a6c3c-c5fd-402d-9fda-b497be370d4c"). InnerVolumeSpecName "kube-api-access-22hlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.917138 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f1a6c3c-c5fd-402d-9fda-b497be370d4c" (UID: "9f1a6c3c-c5fd-402d-9fda-b497be370d4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.917416 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392875ff-4cd5-400d-b6e3-4c07d4b332ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "392875ff-4cd5-400d-b6e3-4c07d4b332ec" (UID: "392875ff-4cd5-400d-b6e3-4c07d4b332ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:32 crc kubenswrapper[4813]: I1202 10:34:32.919323 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac720eab-315f-4740-adb3-329feec1e5ef-kube-api-access-t8x7f" (OuterVolumeSpecName: "kube-api-access-t8x7f") pod "ac720eab-315f-4740-adb3-329feec1e5ef" (UID: "ac720eab-315f-4740-adb3-329feec1e5ef"). InnerVolumeSpecName "kube-api-access-t8x7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.007369 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.011515 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v9wx\" (UniqueName: \"kubernetes.io/projected/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252-kube-api-access-2v9wx\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.011562 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfr9h\" (UniqueName: \"kubernetes.io/projected/b8a79b83-9538-4382-acb8-b44688f3e2ce-kube-api-access-jfr9h\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.011576 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8x7f\" (UniqueName: \"kubernetes.io/projected/ac720eab-315f-4740-adb3-329feec1e5ef-kube-api-access-t8x7f\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.011590 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8a79b83-9538-4382-acb8-b44688f3e2ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.011602 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.011614 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392875ff-4cd5-400d-b6e3-4c07d4b332ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.011626 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx274\" (UniqueName: \"kubernetes.io/projected/392875ff-4cd5-400d-b6e3-4c07d4b332ec-kube-api-access-cx274\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.011637 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22hlc\" (UniqueName: \"kubernetes.io/projected/9f1a6c3c-c5fd-402d-9fda-b497be370d4c-kube-api-access-22hlc\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.300093 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9de0-account-create-update-w7fll" event={"ID":"2e5d7f6a-a5a9-4bc8-a12d-0be10887c252","Type":"ContainerDied","Data":"481c2b6800eec7c1e8caa72cfe8499ecb03fd8650b7f114d51984c7133a02aa3"} Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.300145 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="481c2b6800eec7c1e8caa72cfe8499ecb03fd8650b7f114d51984c7133a02aa3" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.300166 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9de0-account-create-update-w7fll" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.304305 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-l5fzt" event={"ID":"41332d45-348e-4649-9343-110363ba5ee0","Type":"ContainerDied","Data":"bf61f19808aeeeae9df185dde80d75e95a8b6e50e3567ece9a528b42d5ee2187"} Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.304352 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf61f19808aeeeae9df185dde80d75e95a8b6e50e3567ece9a528b42d5ee2187" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.304577 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-l5fzt" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.306194 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k64ck" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.306185 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k64ck" event={"ID":"9f1a6c3c-c5fd-402d-9fda-b497be370d4c","Type":"ContainerDied","Data":"bd58b48eaff05c6b77b837560547f95ac38c1e7f1d1c730ce0fb3b11377e3ff1"} Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.306322 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd58b48eaff05c6b77b837560547f95ac38c1e7f1d1c730ce0fb3b11377e3ff1" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.308582 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-q8lrk" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.308583 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-q8lrk" event={"ID":"392875ff-4cd5-400d-b6e3-4c07d4b332ec","Type":"ContainerDied","Data":"ba77b5c2f4b27790819e4aa3b40c68f6ff83c9e1a3ac3320c77c47b147370263"} Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.308696 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba77b5c2f4b27790819e4aa3b40c68f6ff83c9e1a3ac3320c77c47b147370263" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.310125 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-49bf-account-create-update-vd578" event={"ID":"ac720eab-315f-4740-adb3-329feec1e5ef","Type":"ContainerDied","Data":"ba0955867dd95bd0e363996980dd9d6796d673477ed67feb48483260aff323d0"} Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.310160 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba0955867dd95bd0e363996980dd9d6796d673477ed67feb48483260aff323d0" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.310210 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-49bf-account-create-update-vd578" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.313616 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-445c-account-create-update-8rjvh" event={"ID":"b8a79b83-9538-4382-acb8-b44688f3e2ce","Type":"ContainerDied","Data":"e641604dc23da51a34865e9dfe154246a46e225a6dac49e25804a4399b18d185"} Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.313650 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e641604dc23da51a34865e9dfe154246a46e225a6dac49e25804a4399b18d185" Dec 02 10:34:33 crc kubenswrapper[4813]: I1202 10:34:33.313693 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-445c-account-create-update-8rjvh" Dec 02 10:34:34 crc kubenswrapper[4813]: I1202 10:34:34.068252 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:34:34 crc kubenswrapper[4813]: E1202 10:34:34.068597 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.606914 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2mwx2"] Dec 02 10:34:38 crc kubenswrapper[4813]: E1202 10:34:38.608776 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5d7f6a-a5a9-4bc8-a12d-0be10887c252" containerName="mariadb-account-create-update" Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.608851 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5d7f6a-a5a9-4bc8-a12d-0be10887c252" containerName="mariadb-account-create-update" Dec 02 10:34:38 crc kubenswrapper[4813]: E1202 10:34:38.608923 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a79b83-9538-4382-acb8-b44688f3e2ce" containerName="mariadb-account-create-update" Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.608975 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a79b83-9538-4382-acb8-b44688f3e2ce" containerName="mariadb-account-create-update" Dec 02 10:34:38 crc kubenswrapper[4813]: E1202 10:34:38.609041 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020a3922-2d98-4790-81af-81c2f00f5389" containerName="dnsmasq-dns" Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.609120 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="020a3922-2d98-4790-81af-81c2f00f5389" containerName="dnsmasq-dns" Dec 02 10:34:38 crc kubenswrapper[4813]: E1202 10:34:38.609179 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392875ff-4cd5-400d-b6e3-4c07d4b332ec" containerName="mariadb-database-create" Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.609234 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="392875ff-4cd5-400d-b6e3-4c07d4b332ec" containerName="mariadb-database-create" Dec 02 10:34:38 crc kubenswrapper[4813]: E1202 10:34:38.609292 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="020a3922-2d98-4790-81af-81c2f00f5389" containerName="init" Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 
Dec 02 10:34:38 crc kubenswrapper[4813]: E1202 10:34:38.609414 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac720eab-315f-4740-adb3-329feec1e5ef" containerName="mariadb-account-create-update"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.609470 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac720eab-315f-4740-adb3-329feec1e5ef" containerName="mariadb-account-create-update"
Dec 02 10:34:38 crc kubenswrapper[4813]: E1202 10:34:38.609538 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1a6c3c-c5fd-402d-9fda-b497be370d4c" containerName="mariadb-database-create"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.609593 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1a6c3c-c5fd-402d-9fda-b497be370d4c" containerName="mariadb-database-create"
Dec 02 10:34:38 crc kubenswrapper[4813]: E1202 10:34:38.609649 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41332d45-348e-4649-9343-110363ba5ee0" containerName="mariadb-database-create"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.609704 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="41332d45-348e-4649-9343-110363ba5ee0" containerName="mariadb-database-create"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.609921 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="392875ff-4cd5-400d-b6e3-4c07d4b332ec" containerName="mariadb-database-create"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.609990 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac720eab-315f-4740-adb3-329feec1e5ef" containerName="mariadb-account-create-update"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.610083 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a79b83-9538-4382-acb8-b44688f3e2ce" containerName="mariadb-account-create-update"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.610166 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="020a3922-2d98-4790-81af-81c2f00f5389" containerName="dnsmasq-dns"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.610228 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1a6c3c-c5fd-402d-9fda-b497be370d4c" containerName="mariadb-database-create"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.610288 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5d7f6a-a5a9-4bc8-a12d-0be10887c252" containerName="mariadb-account-create-update"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.610346 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="41332d45-348e-4649-9343-110363ba5ee0" containerName="mariadb-database-create"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.610905 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.616917 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2mwx2"]
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.617984 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j2c5w"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.618205 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.703624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-combined-ca-bundle\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.703970 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-config-data\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.704034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-db-sync-config-data\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.704064 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6rzh\" (UniqueName: \"kubernetes.io/projected/4f5e6919-d274-40a5-b500-77f83781a452-kube-api-access-g6rzh\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.805890 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6rzh\" (UniqueName: \"kubernetes.io/projected/4f5e6919-d274-40a5-b500-77f83781a452-kube-api-access-g6rzh\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.806004 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-combined-ca-bundle\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.806049 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-config-data\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.806124 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-db-sync-config-data\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.811410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-combined-ca-bundle\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.811575 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-config-data\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.811795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-db-sync-config-data\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.827020 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6rzh\" (UniqueName: \"kubernetes.io/projected/4f5e6919-d274-40a5-b500-77f83781a452-kube-api-access-g6rzh\") pod \"glance-db-sync-2mwx2\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:38 crc kubenswrapper[4813]: I1202 10:34:38.926708 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2mwx2"
Dec 02 10:34:39 crc kubenswrapper[4813]: I1202 10:34:39.432536 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2mwx2"]
Dec 02 10:34:39 crc kubenswrapper[4813]: W1202 10:34:39.439186 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f5e6919_d274_40a5_b500_77f83781a452.slice/crio-ef8c44476a0058307d619b177b69cd5590e4623d6ef1456acc93d4fea84dbee2 WatchSource:0}: Error finding container ef8c44476a0058307d619b177b69cd5590e4623d6ef1456acc93d4fea84dbee2: Status 404 returned error can't find the container with id ef8c44476a0058307d619b177b69cd5590e4623d6ef1456acc93d4fea84dbee2
Dec 02 10:34:40 crc kubenswrapper[4813]: I1202 10:34:40.366794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2mwx2" event={"ID":"4f5e6919-d274-40a5-b500-77f83781a452","Type":"ContainerStarted","Data":"ef8c44476a0058307d619b177b69cd5590e4623d6ef1456acc93d4fea84dbee2"}
Dec 02 10:34:43 crc kubenswrapper[4813]: I1202 10:34:43.412919 4813 generic.go:334] "Generic (PLEG): container finished" podID="250ea07a-903e-418f-adf4-0e720a9807f6" containerID="bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79" exitCode=0
Dec 02 10:34:43 crc kubenswrapper[4813]: I1202 10:34:43.413027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"250ea07a-903e-418f-adf4-0e720a9807f6","Type":"ContainerDied","Data":"bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79"}
Dec 02 10:34:43 crc kubenswrapper[4813]: I1202 10:34:43.420550 4813 generic.go:334] "Generic (PLEG): container finished" podID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerID="6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5" exitCode=0
containerID="6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5" exitCode=0 Dec 02 10:34:43 crc kubenswrapper[4813]: I1202 10:34:43.420606 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa","Type":"ContainerDied","Data":"6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5"} Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.437419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa","Type":"ContainerStarted","Data":"139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5"} Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.437735 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.442029 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"250ea07a-903e-418f-adf4-0e720a9807f6","Type":"ContainerStarted","Data":"e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93"} Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.442375 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.464025 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.479210936 podStartE2EDuration="1m1.464002452s" podCreationTimestamp="2025-12-02 10:33:43 +0000 UTC" firstStartedPulling="2025-12-02 10:34:01.652412637 +0000 UTC m=+1565.847586939" lastFinishedPulling="2025-12-02 10:34:08.637204153 +0000 UTC m=+1572.832378455" observedRunningTime="2025-12-02 10:34:44.461061859 +0000 UTC m=+1608.656236161" watchObservedRunningTime="2025-12-02 10:34:44.464002452 +0000 UTC m=+1608.659176754" Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.489213 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.036564914 podStartE2EDuration="1m1.489192566s" podCreationTimestamp="2025-12-02 10:33:43 +0000 UTC" firstStartedPulling="2025-12-02 10:34:01.184640773 +0000 UTC m=+1565.379815075" lastFinishedPulling="2025-12-02 10:34:08.637268425 +0000 UTC m=+1572.832442727" observedRunningTime="2025-12-02 10:34:44.48721454 +0000 UTC m=+1608.682388852" watchObservedRunningTime="2025-12-02 10:34:44.489192566 +0000 UTC m=+1608.684366868" Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.636392 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hfxg4" podUID="0ce6e9c3-8bfa-4bea-8b33-497328af7573" containerName="ovn-controller" probeResult="failure" output=< Dec 02 10:34:44 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 02 10:34:44 crc kubenswrapper[4813]: > Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.682779 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.690222 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zmgkl" Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.929460 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hfxg4-config-t8lf2"] Dec 
02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.930874 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.936044 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 02 10:34:44 crc kubenswrapper[4813]: I1202 10:34:44.938366 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hfxg4-config-t8lf2"] Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.118562 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-additional-scripts\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.118993 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrcq4\" (UniqueName: \"kubernetes.io/projected/2c312ae1-f7c2-4528-9136-b5cc077067c8-kube-api-access-wrcq4\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.119679 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-scripts\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.119735 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.119794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-log-ovn\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.119817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run-ovn\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.221673 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrcq4\" (UniqueName: \"kubernetes.io/projected/2c312ae1-f7c2-4528-9136-b5cc077067c8-kube-api-access-wrcq4\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.221728 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-scripts\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.221756 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.221816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-log-ovn\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.221839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run-ovn\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.221874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-additional-scripts\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.222121 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.222160 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-log-ovn\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.222204 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run-ovn\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.222663 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-additional-scripts\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.224802 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-scripts\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.246881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrcq4\" (UniqueName: \"kubernetes.io/projected/2c312ae1-f7c2-4528-9136-b5cc077067c8-kube-api-access-wrcq4\") pod \"ovn-controller-hfxg4-config-t8lf2\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:45 crc kubenswrapper[4813]: I1202 10:34:45.251918 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:46 crc kubenswrapper[4813]: I1202 10:34:46.074109 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:34:46 crc kubenswrapper[4813]: E1202 10:34:46.074401 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:34:49 crc kubenswrapper[4813]: I1202 10:34:49.624599 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hfxg4" podUID="0ce6e9c3-8bfa-4bea-8b33-497328af7573" containerName="ovn-controller" probeResult="failure" output=< Dec 02 10:34:49 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 02 10:34:49 crc kubenswrapper[4813]: > Dec 02 10:34:51 crc kubenswrapper[4813]: I1202 10:34:51.508310 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hfxg4-config-t8lf2"] Dec 02 10:34:52 crc kubenswrapper[4813]: I1202 10:34:52.524362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2mwx2" event={"ID":"4f5e6919-d274-40a5-b500-77f83781a452","Type":"ContainerStarted","Data":"24b2344a94f10dc1e6be602a5ecccfd78a0b3422886aa38b7fb3105e3bd8afcb"} Dec 02 10:34:52 crc kubenswrapper[4813]: I1202 10:34:52.526768 4813 generic.go:334] "Generic (PLEG): container finished" podID="2c312ae1-f7c2-4528-9136-b5cc077067c8" containerID="f81750011cdae84975844e776f53d009e1ea89b1d8a433410d1c3ae5e4f7287a" exitCode=0 Dec 02 10:34:52 crc kubenswrapper[4813]: I1202 10:34:52.526801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hfxg4-config-t8lf2" event={"ID":"2c312ae1-f7c2-4528-9136-b5cc077067c8","Type":"ContainerDied","Data":"f81750011cdae84975844e776f53d009e1ea89b1d8a433410d1c3ae5e4f7287a"} Dec 02 10:34:52 crc kubenswrapper[4813]: I1202 10:34:52.526823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hfxg4-config-t8lf2" event={"ID":"2c312ae1-f7c2-4528-9136-b5cc077067c8","Type":"ContainerStarted","Data":"bd13c397c7f3892aa1b410b32788b002979479ffdb61766912678aa80e0c16a6"} Dec 02 10:34:52 crc kubenswrapper[4813]: I1202 10:34:52.542738 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2mwx2" podStartSLOduration=2.8368020830000003 
podStartE2EDuration="14.542723437s" podCreationTimestamp="2025-12-02 10:34:38 +0000 UTC" firstStartedPulling="2025-12-02 10:34:39.443043161 +0000 UTC m=+1603.638217463" lastFinishedPulling="2025-12-02 10:34:51.148964515 +0000 UTC m=+1615.344138817" observedRunningTime="2025-12-02 10:34:52.536549612 +0000 UTC m=+1616.731723914" watchObservedRunningTime="2025-12-02 10:34:52.542723437 +0000 UTC m=+1616.737897739" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.834525 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.974400 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run-ovn\") pod \"2c312ae1-f7c2-4528-9136-b5cc077067c8\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.974471 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrcq4\" (UniqueName: \"kubernetes.io/projected/2c312ae1-f7c2-4528-9136-b5cc077067c8-kube-api-access-wrcq4\") pod \"2c312ae1-f7c2-4528-9136-b5cc077067c8\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.974631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-additional-scripts\") pod \"2c312ae1-f7c2-4528-9136-b5cc077067c8\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.974606 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2c312ae1-f7c2-4528-9136-b5cc077067c8" (UID: "2c312ae1-f7c2-4528-9136-b5cc077067c8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.974886 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-scripts\") pod \"2c312ae1-f7c2-4528-9136-b5cc077067c8\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.974928 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-log-ovn\") pod \"2c312ae1-f7c2-4528-9136-b5cc077067c8\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.974945 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run\") pod \"2c312ae1-f7c2-4528-9136-b5cc077067c8\" (UID: \"2c312ae1-f7c2-4528-9136-b5cc077067c8\") " Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.975085 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2c312ae1-f7c2-4528-9136-b5cc077067c8" (UID: "2c312ae1-f7c2-4528-9136-b5cc077067c8"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.975240 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run" (OuterVolumeSpecName: "var-run") pod "2c312ae1-f7c2-4528-9136-b5cc077067c8" (UID: "2c312ae1-f7c2-4528-9136-b5cc077067c8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.975688 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.975717 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.975729 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c312ae1-f7c2-4528-9136-b5cc077067c8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.976143 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2c312ae1-f7c2-4528-9136-b5cc077067c8" (UID: "2c312ae1-f7c2-4528-9136-b5cc077067c8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.976388 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-scripts" (OuterVolumeSpecName: "scripts") pod "2c312ae1-f7c2-4528-9136-b5cc077067c8" (UID: "2c312ae1-f7c2-4528-9136-b5cc077067c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:53 crc kubenswrapper[4813]: I1202 10:34:53.985423 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c312ae1-f7c2-4528-9136-b5cc077067c8-kube-api-access-wrcq4" (OuterVolumeSpecName: "kube-api-access-wrcq4") pod "2c312ae1-f7c2-4528-9136-b5cc077067c8" (UID: "2c312ae1-f7c2-4528-9136-b5cc077067c8"). InnerVolumeSpecName "kube-api-access-wrcq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.081595 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.081712 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrcq4\" (UniqueName: \"kubernetes.io/projected/2c312ae1-f7c2-4528-9136-b5cc077067c8-kube-api-access-wrcq4\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.081734 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c312ae1-f7c2-4528-9136-b5cc077067c8-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.542838 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hfxg4-config-t8lf2" event={"ID":"2c312ae1-f7c2-4528-9136-b5cc077067c8","Type":"ContainerDied","Data":"bd13c397c7f3892aa1b410b32788b002979479ffdb61766912678aa80e0c16a6"} Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.542880 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd13c397c7f3892aa1b410b32788b002979479ffdb61766912678aa80e0c16a6" Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.542894 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4-config-t8lf2" Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.621292 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hfxg4" Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.683028 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.980892 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hfxg4-config-t8lf2"] Dec 02 10:34:54 crc kubenswrapper[4813]: I1202 10:34:54.991048 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hfxg4-config-t8lf2"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.024260 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.189946 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xqrjx"] Dec 02 10:34:55 crc kubenswrapper[4813]: E1202 10:34:55.190366 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c312ae1-f7c2-4528-9136-b5cc077067c8" containerName="ovn-config" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.190390 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c312ae1-f7c2-4528-9136-b5cc077067c8" containerName="ovn-config" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.190600 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c312ae1-f7c2-4528-9136-b5cc077067c8" containerName="ovn-config" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.191236 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.203413 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xqrjx"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.244392 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hfxg4-config-dlgtm"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.247589 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.254892 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.262824 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hfxg4-config-dlgtm"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.284172 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bcl76"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.285386 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bcl76" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.291760 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-802e-account-create-update-ggb6g"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.293561 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.295649 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.300616 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5ddb27-75c7-48ac-9357-409e97f3020e-operator-scripts\") pod \"cinder-db-create-xqrjx\" (UID: \"8f5ddb27-75c7-48ac-9357-409e97f3020e\") " pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.300668 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vrgw\" (UniqueName: \"kubernetes.io/projected/8f5ddb27-75c7-48ac-9357-409e97f3020e-kube-api-access-6vrgw\") pod \"cinder-db-create-xqrjx\" (UID: \"8f5ddb27-75c7-48ac-9357-409e97f3020e\") " pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.337561 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bcl76"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.376133 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-802e-account-create-update-ggb6g"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.397598 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-20a8-account-create-update-8lpn8"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.398912 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.402680 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.403755 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run-ovn\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.403796 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-log-ovn\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.403837 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f206f49-f7ca-479b-843e-2377ddb90ce1-operator-scripts\") pod \"barbican-db-create-bcl76\" (UID: \"8f206f49-f7ca-479b-843e-2377ddb90ce1\") " pod="openstack/barbican-db-create-bcl76" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.403878 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsldp\" (UniqueName: \"kubernetes.io/projected/b20b5439-344c-4a1b-a474-d9e7939e7e3e-kube-api-access-qsldp\") pod \"barbican-802e-account-create-update-ggb6g\" (UID: \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\") " pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.403904 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgns\" (UniqueName: \"kubernetes.io/projected/8f206f49-f7ca-479b-843e-2377ddb90ce1-kube-api-access-qtgns\") pod \"barbican-db-create-bcl76\" (UID: \"8f206f49-f7ca-479b-843e-2377ddb90ce1\") " pod="openstack/barbican-db-create-bcl76" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.403972 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tz4\" (UniqueName: \"kubernetes.io/projected/06549f00-ca55-4da7-9b6f-e3011d0550cc-kube-api-access-27tz4\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.404007 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-scripts\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.404034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-additional-scripts\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " 
pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.404090 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20b5439-344c-4a1b-a474-d9e7939e7e3e-operator-scripts\") pod \"barbican-802e-account-create-update-ggb6g\" (UID: \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\") " pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.404145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5ddb27-75c7-48ac-9357-409e97f3020e-operator-scripts\") pod \"cinder-db-create-xqrjx\" (UID: \"8f5ddb27-75c7-48ac-9357-409e97f3020e\") " pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.404175 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vrgw\" (UniqueName: \"kubernetes.io/projected/8f5ddb27-75c7-48ac-9357-409e97f3020e-kube-api-access-6vrgw\") pod \"cinder-db-create-xqrjx\" (UID: \"8f5ddb27-75c7-48ac-9357-409e97f3020e\") " pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.404204 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.405145 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5ddb27-75c7-48ac-9357-409e97f3020e-operator-scripts\") pod \"cinder-db-create-xqrjx\" (UID: \"8f5ddb27-75c7-48ac-9357-409e97f3020e\") " pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.405429 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-20a8-account-create-update-8lpn8"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.441854 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vrgw\" (UniqueName: \"kubernetes.io/projected/8f5ddb27-75c7-48ac-9357-409e97f3020e-kube-api-access-6vrgw\") pod \"cinder-db-create-xqrjx\" (UID: \"8f5ddb27-75c7-48ac-9357-409e97f3020e\") " pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.477692 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hn79k"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.478722 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.482943 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.483324 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tsjhn" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.483534 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.483661 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.498838 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hn79k"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506179 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmgqt\" (UniqueName: \"kubernetes.io/projected/3b2a6381-8062-465d-b129-ac4153f9305e-kube-api-access-fmgqt\") pod \"cinder-20a8-account-create-update-8lpn8\" (UID: \"3b2a6381-8062-465d-b129-ac4153f9305e\") " pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run-ovn\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506279 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-log-ovn\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f206f49-f7ca-479b-843e-2377ddb90ce1-operator-scripts\") pod \"barbican-db-create-bcl76\" (UID: \"8f206f49-f7ca-479b-843e-2377ddb90ce1\") " pod="openstack/barbican-db-create-bcl76" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsldp\" (UniqueName: \"kubernetes.io/projected/b20b5439-344c-4a1b-a474-d9e7939e7e3e-kube-api-access-qsldp\") pod \"barbican-802e-account-create-update-ggb6g\" (UID: \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\") " pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgns\" (UniqueName: 
\"kubernetes.io/projected/8f206f49-f7ca-479b-843e-2377ddb90ce1-kube-api-access-qtgns\") pod \"barbican-db-create-bcl76\" (UID: \"8f206f49-f7ca-479b-843e-2377ddb90ce1\") " pod="openstack/barbican-db-create-bcl76" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506385 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b2a6381-8062-465d-b129-ac4153f9305e-operator-scripts\") pod \"cinder-20a8-account-create-update-8lpn8\" (UID: \"3b2a6381-8062-465d-b129-ac4153f9305e\") " pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506417 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tz4\" (UniqueName: \"kubernetes.io/projected/06549f00-ca55-4da7-9b6f-e3011d0550cc-kube-api-access-27tz4\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506440 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-scripts\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-additional-scripts\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506488 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20b5439-344c-4a1b-a474-d9e7939e7e3e-operator-scripts\") pod \"barbican-802e-account-create-update-ggb6g\" (UID: \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\") " pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506561 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run-ovn\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506569 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-log-ovn\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.506604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.507233 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20b5439-344c-4a1b-a474-d9e7939e7e3e-operator-scripts\") pod \"barbican-802e-account-create-update-ggb6g\" (UID: \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\") " pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.507565 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-additional-scripts\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.507570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f206f49-f7ca-479b-843e-2377ddb90ce1-operator-scripts\") pod \"barbican-db-create-bcl76\" (UID: \"8f206f49-f7ca-479b-843e-2377ddb90ce1\") " pod="openstack/barbican-db-create-bcl76" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.508853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-scripts\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.512937 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.525483 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgns\" (UniqueName: \"kubernetes.io/projected/8f206f49-f7ca-479b-843e-2377ddb90ce1-kube-api-access-qtgns\") pod \"barbican-db-create-bcl76\" (UID: \"8f206f49-f7ca-479b-843e-2377ddb90ce1\") " pod="openstack/barbican-db-create-bcl76" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.528087 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tz4\" (UniqueName: \"kubernetes.io/projected/06549f00-ca55-4da7-9b6f-e3011d0550cc-kube-api-access-27tz4\") pod \"ovn-controller-hfxg4-config-dlgtm\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.536578 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsldp\" (UniqueName: \"kubernetes.io/projected/b20b5439-344c-4a1b-a474-d9e7939e7e3e-kube-api-access-qsldp\") pod \"barbican-802e-account-create-update-ggb6g\" (UID: \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\") " pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.574607 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.578862 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7skjt"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.580921 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7skjt" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.595222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7skjt"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.608213 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-config-data\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.608267 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmgqt\" (UniqueName: \"kubernetes.io/projected/3b2a6381-8062-465d-b129-ac4153f9305e-kube-api-access-fmgqt\") pod \"cinder-20a8-account-create-update-8lpn8\" (UID: \"3b2a6381-8062-465d-b129-ac4153f9305e\") " pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.608400 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b2a6381-8062-465d-b129-ac4153f9305e-operator-scripts\") pod \"cinder-20a8-account-create-update-8lpn8\" (UID: \"3b2a6381-8062-465d-b129-ac4153f9305e\") " pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.608441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-combined-ca-bundle\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.608503 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgxk\" (UniqueName: \"kubernetes.io/projected/82676b1f-f4f3-42df-be67-62bbd3373116-kube-api-access-csgxk\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.621797 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.623050 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bcl76" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.623504 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b2a6381-8062-465d-b129-ac4153f9305e-operator-scripts\") pod \"cinder-20a8-account-create-update-8lpn8\" (UID: \"3b2a6381-8062-465d-b129-ac4153f9305e\") " pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.711088 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-combined-ca-bundle\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.711172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csgxk\" (UniqueName: \"kubernetes.io/projected/82676b1f-f4f3-42df-be67-62bbd3373116-kube-api-access-csgxk\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.711217 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-config-data\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.711257 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ce95401-372b-4cad-b5b7-82d3575cd3da-operator-scripts\") pod \"neutron-db-create-7skjt\" (UID: \"7ce95401-372b-4cad-b5b7-82d3575cd3da\") " pod="openstack/neutron-db-create-7skjt" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.711316 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbws4\" (UniqueName: \"kubernetes.io/projected/7ce95401-372b-4cad-b5b7-82d3575cd3da-kube-api-access-hbws4\") pod \"neutron-db-create-7skjt\" (UID: \"7ce95401-372b-4cad-b5b7-82d3575cd3da\") " pod="openstack/neutron-db-create-7skjt" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.711685 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmgqt\" (UniqueName: \"kubernetes.io/projected/3b2a6381-8062-465d-b129-ac4153f9305e-kube-api-access-fmgqt\") pod \"cinder-20a8-account-create-update-8lpn8\" (UID: \"3b2a6381-8062-465d-b129-ac4153f9305e\") " pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.715773 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.717910 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7666-account-create-update-zclc9"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.719430 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.723528 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.729488 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-config-data\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.730667 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7666-account-create-update-zclc9"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.733853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgxk\" (UniqueName: \"kubernetes.io/projected/82676b1f-f4f3-42df-be67-62bbd3373116-kube-api-access-csgxk\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.734637 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-combined-ca-bundle\") pod \"keystone-db-sync-hn79k\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.810192 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hn79k" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.813368 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fda52a79-9d85-4a62-935e-b0e43270148c-operator-scripts\") pod \"neutron-7666-account-create-update-zclc9\" (UID: \"fda52a79-9d85-4a62-935e-b0e43270148c\") " pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.813418 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ce95401-372b-4cad-b5b7-82d3575cd3da-operator-scripts\") pod \"neutron-db-create-7skjt\" (UID: \"7ce95401-372b-4cad-b5b7-82d3575cd3da\") " pod="openstack/neutron-db-create-7skjt" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.813441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntch\" (UniqueName: \"kubernetes.io/projected/fda52a79-9d85-4a62-935e-b0e43270148c-kube-api-access-kntch\") pod \"neutron-7666-account-create-update-zclc9\" (UID: \"fda52a79-9d85-4a62-935e-b0e43270148c\") " pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.813485 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbws4\" (UniqueName: \"kubernetes.io/projected/7ce95401-372b-4cad-b5b7-82d3575cd3da-kube-api-access-hbws4\") pod \"neutron-db-create-7skjt\" (UID: \"7ce95401-372b-4cad-b5b7-82d3575cd3da\") " pod="openstack/neutron-db-create-7skjt" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.814464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7ce95401-372b-4cad-b5b7-82d3575cd3da-operator-scripts\") pod \"neutron-db-create-7skjt\" (UID: \"7ce95401-372b-4cad-b5b7-82d3575cd3da\") " pod="openstack/neutron-db-create-7skjt" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.832886 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbws4\" (UniqueName: \"kubernetes.io/projected/7ce95401-372b-4cad-b5b7-82d3575cd3da-kube-api-access-hbws4\") pod \"neutron-db-create-7skjt\" (UID: \"7ce95401-372b-4cad-b5b7-82d3575cd3da\") " pod="openstack/neutron-db-create-7skjt" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.855220 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xqrjx"] Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.914695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fda52a79-9d85-4a62-935e-b0e43270148c-operator-scripts\") pod \"neutron-7666-account-create-update-zclc9\" (UID: \"fda52a79-9d85-4a62-935e-b0e43270148c\") " pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.914774 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kntch\" (UniqueName: \"kubernetes.io/projected/fda52a79-9d85-4a62-935e-b0e43270148c-kube-api-access-kntch\") pod \"neutron-7666-account-create-update-zclc9\" (UID: \"fda52a79-9d85-4a62-935e-b0e43270148c\") " pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.916399 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fda52a79-9d85-4a62-935e-b0e43270148c-operator-scripts\") pod \"neutron-7666-account-create-update-zclc9\" (UID: \"fda52a79-9d85-4a62-935e-b0e43270148c\") " pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:34:55 crc kubenswrapper[4813]: I1202 10:34:55.941051 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kntch\" (UniqueName: \"kubernetes.io/projected/fda52a79-9d85-4a62-935e-b0e43270148c-kube-api-access-kntch\") pod \"neutron-7666-account-create-update-zclc9\" (UID: \"fda52a79-9d85-4a62-935e-b0e43270148c\") " pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.016064 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7skjt" Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.053842 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.084384 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c312ae1-f7c2-4528-9136-b5cc077067c8" path="/var/lib/kubelet/pods/2c312ae1-f7c2-4528-9136-b5cc077067c8/volumes" Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.180862 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bcl76"] Dec 02 10:34:56 crc kubenswrapper[4813]: W1202 10:34:56.185227 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f206f49_f7ca_479b_843e_2377ddb90ce1.slice/crio-13f7883e76db08be8500ccb6215879b552dd7fa13aa27e28fcc95cc3666b5a0a WatchSource:0}: Error finding container 13f7883e76db08be8500ccb6215879b552dd7fa13aa27e28fcc95cc3666b5a0a: Status 404 returned error can't find the container with id 13f7883e76db08be8500ccb6215879b552dd7fa13aa27e28fcc95cc3666b5a0a Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.237651 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hfxg4-config-dlgtm"] Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.443932 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-20a8-account-create-update-8lpn8"] Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.454634 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-802e-account-create-update-ggb6g"] Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.463038 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hn79k"] Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.469346 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 02 10:34:56 crc kubenswrapper[4813]: W1202 10:34:56.487573 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb20b5439_344c_4a1b_a474_d9e7939e7e3e.slice/crio-e263bbec6fad3153cd46e863384a9f445285bb515f597a1bf937f78e95fcd238 WatchSource:0}: Error finding container e263bbec6fad3153cd46e863384a9f445285bb515f597a1bf937f78e95fcd238: Status 404 returned error can't find the container with id e263bbec6fad3153cd46e863384a9f445285bb515f597a1bf937f78e95fcd238 Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.514637 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.591541 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hn79k" event={"ID":"82676b1f-f4f3-42df-be67-62bbd3373116","Type":"ContainerStarted","Data":"42deb5b1814d39d99128a89f01aa8fd0ddbb37cbf99a1e10ef707bc5b99b5a36"} Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.597232 4813 generic.go:334] "Generic (PLEG): container finished" podID="8f5ddb27-75c7-48ac-9357-409e97f3020e" containerID="cb30d85b987a54d447f5ca49e143873b4a00c21d4b57b7aaa2787b2ba3b45d12" exitCode=0 Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.597336 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xqrjx" event={"ID":"8f5ddb27-75c7-48ac-9357-409e97f3020e","Type":"ContainerDied","Data":"cb30d85b987a54d447f5ca49e143873b4a00c21d4b57b7aaa2787b2ba3b45d12"} Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.597373 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-db-create-xqrjx" event={"ID":"8f5ddb27-75c7-48ac-9357-409e97f3020e","Type":"ContainerStarted","Data":"bab1610c507021243013e7d2ba3cb522a71c5eea65b965cb27d6c4897cb1610d"} Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.603572 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bcl76" event={"ID":"8f206f49-f7ca-479b-843e-2377ddb90ce1","Type":"ContainerStarted","Data":"74641adce3de7b5e6d842c96a22e65c9d8c64e3ee25f65e4b837015a0a7a4871"} Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.603617 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bcl76" event={"ID":"8f206f49-f7ca-479b-843e-2377ddb90ce1","Type":"ContainerStarted","Data":"13f7883e76db08be8500ccb6215879b552dd7fa13aa27e28fcc95cc3666b5a0a"} Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.605431 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-20a8-account-create-update-8lpn8" event={"ID":"3b2a6381-8062-465d-b129-ac4153f9305e","Type":"ContainerStarted","Data":"f82702ccc79292d5f4c4c1b6a060aa1184b4d4cda0c42e86c5157d8ca54e0366"} Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.612669 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hfxg4-config-dlgtm" event={"ID":"06549f00-ca55-4da7-9b6f-e3011d0550cc","Type":"ContainerStarted","Data":"e881d01064dac965789a0edcc055514c0206aa583f09c79f8ed521bdee4a8966"} Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.617474 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7666-account-create-update-zclc9"] Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.618748 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-802e-account-create-update-ggb6g" event={"ID":"b20b5439-344c-4a1b-a474-d9e7939e7e3e","Type":"ContainerStarted","Data":"e263bbec6fad3153cd46e863384a9f445285bb515f597a1bf937f78e95fcd238"} Dec 02 10:34:56 crc kubenswrapper[4813]: W1202 10:34:56.650950 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ce95401_372b_4cad_b5b7_82d3575cd3da.slice/crio-7c92856096698fd0852aa19cbd3637630ee33117210060b1e52c57169fdcf196 WatchSource:0}: Error finding container 7c92856096698fd0852aa19cbd3637630ee33117210060b1e52c57169fdcf196: Status 404 returned error can't find the container with id 7c92856096698fd0852aa19cbd3637630ee33117210060b1e52c57169fdcf196 Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.654530 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7skjt"] Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.661301 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-bcl76" podStartSLOduration=1.6612825089999999 podStartE2EDuration="1.661282509s" podCreationTimestamp="2025-12-02 10:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:34:56.656182544 +0000 UTC m=+1620.851356846" watchObservedRunningTime="2025-12-02 10:34:56.661282509 +0000 UTC m=+1620.856456811" Dec 02 10:34:56 crc kubenswrapper[4813]: I1202 10:34:56.671603 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.632624 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="8f206f49-f7ca-479b-843e-2377ddb90ce1" containerID="74641adce3de7b5e6d842c96a22e65c9d8c64e3ee25f65e4b837015a0a7a4871" exitCode=0 Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.632727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bcl76" event={"ID":"8f206f49-f7ca-479b-843e-2377ddb90ce1","Type":"ContainerDied","Data":"74641adce3de7b5e6d842c96a22e65c9d8c64e3ee25f65e4b837015a0a7a4871"} Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.634719 4813 generic.go:334] "Generic (PLEG): container finished" podID="3b2a6381-8062-465d-b129-ac4153f9305e" containerID="b30d81ad79714376ac0557f2827d267b0c16e6e8f01c5a35cb17e92f01b80a5d" exitCode=0 Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.634789 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-20a8-account-create-update-8lpn8" event={"ID":"3b2a6381-8062-465d-b129-ac4153f9305e","Type":"ContainerDied","Data":"b30d81ad79714376ac0557f2827d267b0c16e6e8f01c5a35cb17e92f01b80a5d"} Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.636530 4813 generic.go:334] "Generic (PLEG): container finished" podID="b20b5439-344c-4a1b-a474-d9e7939e7e3e" containerID="6c3cc1d466544b200a35473e7230fb8d848e14b7c37b4d6336bc119416ea6f66" exitCode=0 Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.636580 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-802e-account-create-update-ggb6g" event={"ID":"b20b5439-344c-4a1b-a474-d9e7939e7e3e","Type":"ContainerDied","Data":"6c3cc1d466544b200a35473e7230fb8d848e14b7c37b4d6336bc119416ea6f66"} Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.639916 4813 generic.go:334] "Generic (PLEG): container finished" podID="06549f00-ca55-4da7-9b6f-e3011d0550cc" containerID="615acd113da5506851dfa6e6fb83b319169b19f0349396678ef47107f39ad17f" exitCode=0 Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.639998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hfxg4-config-dlgtm" event={"ID":"06549f00-ca55-4da7-9b6f-e3011d0550cc","Type":"ContainerDied","Data":"615acd113da5506851dfa6e6fb83b319169b19f0349396678ef47107f39ad17f"} Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.642256 4813 generic.go:334] "Generic (PLEG): container finished" podID="fda52a79-9d85-4a62-935e-b0e43270148c" containerID="8f27dea7fc6bfae55d0759a6fc5425a67b4b47714abc449e80e7343168c169c1" exitCode=0 Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.642328 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7666-account-create-update-zclc9" event={"ID":"fda52a79-9d85-4a62-935e-b0e43270148c","Type":"ContainerDied","Data":"8f27dea7fc6bfae55d0759a6fc5425a67b4b47714abc449e80e7343168c169c1"} Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.642353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7666-account-create-update-zclc9" event={"ID":"fda52a79-9d85-4a62-935e-b0e43270148c","Type":"ContainerStarted","Data":"69b1a3b466e52d0f46e3789e3e71efe59bdf5148634a19ab510dd7cef801cb01"} Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.644331 4813 generic.go:334] "Generic (PLEG): container finished" podID="7ce95401-372b-4cad-b5b7-82d3575cd3da" containerID="35f669889e6a0a3f23b168c157de95aee3d09f35e476acd8aaf0334fe490ea58" exitCode=0 Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.644580 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7skjt" 
event={"ID":"7ce95401-372b-4cad-b5b7-82d3575cd3da","Type":"ContainerDied","Data":"35f669889e6a0a3f23b168c157de95aee3d09f35e476acd8aaf0334fe490ea58"} Dec 02 10:34:57 crc kubenswrapper[4813]: I1202 10:34:57.644609 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7skjt" event={"ID":"7ce95401-372b-4cad-b5b7-82d3575cd3da","Type":"ContainerStarted","Data":"7c92856096698fd0852aa19cbd3637630ee33117210060b1e52c57169fdcf196"} Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.042160 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.163195 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vrgw\" (UniqueName: \"kubernetes.io/projected/8f5ddb27-75c7-48ac-9357-409e97f3020e-kube-api-access-6vrgw\") pod \"8f5ddb27-75c7-48ac-9357-409e97f3020e\" (UID: \"8f5ddb27-75c7-48ac-9357-409e97f3020e\") " Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.163281 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5ddb27-75c7-48ac-9357-409e97f3020e-operator-scripts\") pod \"8f5ddb27-75c7-48ac-9357-409e97f3020e\" (UID: \"8f5ddb27-75c7-48ac-9357-409e97f3020e\") " Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.165257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f5ddb27-75c7-48ac-9357-409e97f3020e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f5ddb27-75c7-48ac-9357-409e97f3020e" (UID: "8f5ddb27-75c7-48ac-9357-409e97f3020e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.170698 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5ddb27-75c7-48ac-9357-409e97f3020e-kube-api-access-6vrgw" (OuterVolumeSpecName: "kube-api-access-6vrgw") pod "8f5ddb27-75c7-48ac-9357-409e97f3020e" (UID: "8f5ddb27-75c7-48ac-9357-409e97f3020e"). InnerVolumeSpecName "kube-api-access-6vrgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.265792 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vrgw\" (UniqueName: \"kubernetes.io/projected/8f5ddb27-75c7-48ac-9357-409e97f3020e-kube-api-access-6vrgw\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.265836 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5ddb27-75c7-48ac-9357-409e97f3020e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.676733 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xqrjx" Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.676959 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xqrjx" event={"ID":"8f5ddb27-75c7-48ac-9357-409e97f3020e","Type":"ContainerDied","Data":"bab1610c507021243013e7d2ba3cb522a71c5eea65b965cb27d6c4897cb1610d"} Dec 02 10:34:58 crc kubenswrapper[4813]: I1202 10:34:58.677166 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bab1610c507021243013e7d2ba3cb522a71c5eea65b965cb27d6c4897cb1610d" Dec 02 10:34:59 crc kubenswrapper[4813]: I1202 10:34:59.069797 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:34:59 crc kubenswrapper[4813]: E1202 10:34:59.070156 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:34:59 crc kubenswrapper[4813]: I1202 10:34:59.689007 4813 generic.go:334] "Generic (PLEG): container finished" podID="4f5e6919-d274-40a5-b500-77f83781a452" containerID="24b2344a94f10dc1e6be602a5ecccfd78a0b3422886aa38b7fb3105e3bd8afcb" exitCode=0 Dec 02 10:34:59 crc kubenswrapper[4813]: I1202 10:34:59.689233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2mwx2" event={"ID":"4f5e6919-d274-40a5-b500-77f83781a452","Type":"ContainerDied","Data":"24b2344a94f10dc1e6be602a5ecccfd78a0b3422886aa38b7fb3105e3bd8afcb"} Dec 02 10:35:01 crc kubenswrapper[4813]: I1202 10:35:01.986373 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bcl76" Dec 02 10:35:01 crc kubenswrapper[4813]: I1202 10:35:01.995485 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7skjt" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.009707 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.025665 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.033625 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.048521 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.071352 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2mwx2" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.145927 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbws4\" (UniqueName: \"kubernetes.io/projected/7ce95401-372b-4cad-b5b7-82d3575cd3da-kube-api-access-hbws4\") pod \"7ce95401-372b-4cad-b5b7-82d3575cd3da\" (UID: \"7ce95401-372b-4cad-b5b7-82d3575cd3da\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146041 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27tz4\" (UniqueName: \"kubernetes.io/projected/06549f00-ca55-4da7-9b6f-e3011d0550cc-kube-api-access-27tz4\") pod \"06549f00-ca55-4da7-9b6f-e3011d0550cc\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146087 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-scripts\") pod \"06549f00-ca55-4da7-9b6f-e3011d0550cc\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146114 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmgqt\" (UniqueName: \"kubernetes.io/projected/3b2a6381-8062-465d-b129-ac4153f9305e-kube-api-access-fmgqt\") pod \"3b2a6381-8062-465d-b129-ac4153f9305e\" (UID: \"3b2a6381-8062-465d-b129-ac4153f9305e\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146171 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtgns\" (UniqueName: \"kubernetes.io/projected/8f206f49-f7ca-479b-843e-2377ddb90ce1-kube-api-access-qtgns\") pod \"8f206f49-f7ca-479b-843e-2377ddb90ce1\" (UID: \"8f206f49-f7ca-479b-843e-2377ddb90ce1\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146211 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f206f49-f7ca-479b-843e-2377ddb90ce1-operator-scripts\") pod \"8f206f49-f7ca-479b-843e-2377ddb90ce1\" (UID: \"8f206f49-f7ca-479b-843e-2377ddb90ce1\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146245 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kntch\" (UniqueName: \"kubernetes.io/projected/fda52a79-9d85-4a62-935e-b0e43270148c-kube-api-access-kntch\") pod \"fda52a79-9d85-4a62-935e-b0e43270148c\" (UID: \"fda52a79-9d85-4a62-935e-b0e43270148c\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146272 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run-ovn\") pod \"06549f00-ca55-4da7-9b6f-e3011d0550cc\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146296 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run\") pod \"06549f00-ca55-4da7-9b6f-e3011d0550cc\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146417 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-log-ovn\") pod 
\"06549f00-ca55-4da7-9b6f-e3011d0550cc\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146443 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-combined-ca-bundle\") pod \"4f5e6919-d274-40a5-b500-77f83781a452\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146477 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-additional-scripts\") pod \"06549f00-ca55-4da7-9b6f-e3011d0550cc\" (UID: \"06549f00-ca55-4da7-9b6f-e3011d0550cc\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146496 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ce95401-372b-4cad-b5b7-82d3575cd3da-operator-scripts\") pod \"7ce95401-372b-4cad-b5b7-82d3575cd3da\" (UID: \"7ce95401-372b-4cad-b5b7-82d3575cd3da\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146517 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-db-sync-config-data\") pod \"4f5e6919-d274-40a5-b500-77f83781a452\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-config-data\") pod \"4f5e6919-d274-40a5-b500-77f83781a452\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6rzh\" (UniqueName: \"kubernetes.io/projected/4f5e6919-d274-40a5-b500-77f83781a452-kube-api-access-g6rzh\") pod \"4f5e6919-d274-40a5-b500-77f83781a452\" (UID: \"4f5e6919-d274-40a5-b500-77f83781a452\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146682 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20b5439-344c-4a1b-a474-d9e7939e7e3e-operator-scripts\") pod \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\" (UID: \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146730 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fda52a79-9d85-4a62-935e-b0e43270148c-operator-scripts\") pod \"fda52a79-9d85-4a62-935e-b0e43270148c\" (UID: \"fda52a79-9d85-4a62-935e-b0e43270148c\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsldp\" (UniqueName: \"kubernetes.io/projected/b20b5439-344c-4a1b-a474-d9e7939e7e3e-kube-api-access-qsldp\") pod \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\" (UID: \"b20b5439-344c-4a1b-a474-d9e7939e7e3e\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.146825 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3b2a6381-8062-465d-b129-ac4153f9305e-operator-scripts\") pod \"3b2a6381-8062-465d-b129-ac4153f9305e\" (UID: \"3b2a6381-8062-465d-b129-ac4153f9305e\") " Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.147200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "06549f00-ca55-4da7-9b6f-e3011d0550cc" (UID: "06549f00-ca55-4da7-9b6f-e3011d0550cc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.147274 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "06549f00-ca55-4da7-9b6f-e3011d0550cc" (UID: "06549f00-ca55-4da7-9b6f-e3011d0550cc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.147299 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run" (OuterVolumeSpecName: "var-run") pod "06549f00-ca55-4da7-9b6f-e3011d0550cc" (UID: "06549f00-ca55-4da7-9b6f-e3011d0550cc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.148264 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda52a79-9d85-4a62-935e-b0e43270148c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fda52a79-9d85-4a62-935e-b0e43270148c" (UID: "fda52a79-9d85-4a62-935e-b0e43270148c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.148598 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b20b5439-344c-4a1b-a474-d9e7939e7e3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b20b5439-344c-4a1b-a474-d9e7939e7e3e" (UID: "b20b5439-344c-4a1b-a474-d9e7939e7e3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.148602 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f206f49-f7ca-479b-843e-2377ddb90ce1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f206f49-f7ca-479b-843e-2377ddb90ce1" (UID: "8f206f49-f7ca-479b-843e-2377ddb90ce1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.148543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2a6381-8062-465d-b129-ac4153f9305e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b2a6381-8062-465d-b129-ac4153f9305e" (UID: "3b2a6381-8062-465d-b129-ac4153f9305e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.148912 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce95401-372b-4cad-b5b7-82d3575cd3da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ce95401-372b-4cad-b5b7-82d3575cd3da" (UID: "7ce95401-372b-4cad-b5b7-82d3575cd3da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.149837 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "06549f00-ca55-4da7-9b6f-e3011d0550cc" (UID: "06549f00-ca55-4da7-9b6f-e3011d0550cc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150116 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ce95401-372b-4cad-b5b7-82d3575cd3da-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150149 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b20b5439-344c-4a1b-a474-d9e7939e7e3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150163 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fda52a79-9d85-4a62-935e-b0e43270148c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150176 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b2a6381-8062-465d-b129-ac4153f9305e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150188 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f206f49-f7ca-479b-843e-2377ddb90ce1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150201 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150214 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150226 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06549f00-ca55-4da7-9b6f-e3011d0550cc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150240 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.150573 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-scripts" (OuterVolumeSpecName: "scripts") pod "06549f00-ca55-4da7-9b6f-e3011d0550cc" (UID: "06549f00-ca55-4da7-9b6f-e3011d0550cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.153607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda52a79-9d85-4a62-935e-b0e43270148c-kube-api-access-kntch" (OuterVolumeSpecName: "kube-api-access-kntch") pod "fda52a79-9d85-4a62-935e-b0e43270148c" (UID: "fda52a79-9d85-4a62-935e-b0e43270148c"). InnerVolumeSpecName "kube-api-access-kntch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.153772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4f5e6919-d274-40a5-b500-77f83781a452" (UID: "4f5e6919-d274-40a5-b500-77f83781a452"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.154126 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06549f00-ca55-4da7-9b6f-e3011d0550cc-kube-api-access-27tz4" (OuterVolumeSpecName: "kube-api-access-27tz4") pod "06549f00-ca55-4da7-9b6f-e3011d0550cc" (UID: "06549f00-ca55-4da7-9b6f-e3011d0550cc"). InnerVolumeSpecName "kube-api-access-27tz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.154249 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2a6381-8062-465d-b129-ac4153f9305e-kube-api-access-fmgqt" (OuterVolumeSpecName: "kube-api-access-fmgqt") pod "3b2a6381-8062-465d-b129-ac4153f9305e" (UID: "3b2a6381-8062-465d-b129-ac4153f9305e"). InnerVolumeSpecName "kube-api-access-fmgqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.154456 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20b5439-344c-4a1b-a474-d9e7939e7e3e-kube-api-access-qsldp" (OuterVolumeSpecName: "kube-api-access-qsldp") pod "b20b5439-344c-4a1b-a474-d9e7939e7e3e" (UID: "b20b5439-344c-4a1b-a474-d9e7939e7e3e"). InnerVolumeSpecName "kube-api-access-qsldp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.155392 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5e6919-d274-40a5-b500-77f83781a452-kube-api-access-g6rzh" (OuterVolumeSpecName: "kube-api-access-g6rzh") pod "4f5e6919-d274-40a5-b500-77f83781a452" (UID: "4f5e6919-d274-40a5-b500-77f83781a452"). InnerVolumeSpecName "kube-api-access-g6rzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.155663 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f206f49-f7ca-479b-843e-2377ddb90ce1-kube-api-access-qtgns" (OuterVolumeSpecName: "kube-api-access-qtgns") pod "8f206f49-f7ca-479b-843e-2377ddb90ce1" (UID: "8f206f49-f7ca-479b-843e-2377ddb90ce1"). InnerVolumeSpecName "kube-api-access-qtgns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.158149 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce95401-372b-4cad-b5b7-82d3575cd3da-kube-api-access-hbws4" (OuterVolumeSpecName: "kube-api-access-hbws4") pod "7ce95401-372b-4cad-b5b7-82d3575cd3da" (UID: "7ce95401-372b-4cad-b5b7-82d3575cd3da"). InnerVolumeSpecName "kube-api-access-hbws4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.179043 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f5e6919-d274-40a5-b500-77f83781a452" (UID: "4f5e6919-d274-40a5-b500-77f83781a452"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.204318 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-config-data" (OuterVolumeSpecName: "config-data") pod "4f5e6919-d274-40a5-b500-77f83781a452" (UID: "4f5e6919-d274-40a5-b500-77f83781a452"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251779 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251815 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251825 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5e6919-d274-40a5-b500-77f83781a452-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251834 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6rzh\" (UniqueName: \"kubernetes.io/projected/4f5e6919-d274-40a5-b500-77f83781a452-kube-api-access-g6rzh\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251847 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsldp\" (UniqueName: \"kubernetes.io/projected/b20b5439-344c-4a1b-a474-d9e7939e7e3e-kube-api-access-qsldp\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251857 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbws4\" (UniqueName: \"kubernetes.io/projected/7ce95401-372b-4cad-b5b7-82d3575cd3da-kube-api-access-hbws4\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251866 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27tz4\" (UniqueName: \"kubernetes.io/projected/06549f00-ca55-4da7-9b6f-e3011d0550cc-kube-api-access-27tz4\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251875 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06549f00-ca55-4da7-9b6f-e3011d0550cc-scripts\") on 
node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251884 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmgqt\" (UniqueName: \"kubernetes.io/projected/3b2a6381-8062-465d-b129-ac4153f9305e-kube-api-access-fmgqt\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251892 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtgns\" (UniqueName: \"kubernetes.io/projected/8f206f49-f7ca-479b-843e-2377ddb90ce1-kube-api-access-qtgns\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.251900 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kntch\" (UniqueName: \"kubernetes.io/projected/fda52a79-9d85-4a62-935e-b0e43270148c-kube-api-access-kntch\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.728514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2mwx2" event={"ID":"4f5e6919-d274-40a5-b500-77f83781a452","Type":"ContainerDied","Data":"ef8c44476a0058307d619b177b69cd5590e4623d6ef1456acc93d4fea84dbee2"} Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.728552 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef8c44476a0058307d619b177b69cd5590e4623d6ef1456acc93d4fea84dbee2" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.728604 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2mwx2" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.734802 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bcl76" event={"ID":"8f206f49-f7ca-479b-843e-2377ddb90ce1","Type":"ContainerDied","Data":"13f7883e76db08be8500ccb6215879b552dd7fa13aa27e28fcc95cc3666b5a0a"} Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.734823 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bcl76" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.734841 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f7883e76db08be8500ccb6215879b552dd7fa13aa27e28fcc95cc3666b5a0a" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.736035 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-802e-account-create-update-ggb6g" event={"ID":"b20b5439-344c-4a1b-a474-d9e7939e7e3e","Type":"ContainerDied","Data":"e263bbec6fad3153cd46e863384a9f445285bb515f597a1bf937f78e95fcd238"} Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.736056 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e263bbec6fad3153cd46e863384a9f445285bb515f597a1bf937f78e95fcd238" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.736121 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-802e-account-create-update-ggb6g" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.741052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hfxg4-config-dlgtm" event={"ID":"06549f00-ca55-4da7-9b6f-e3011d0550cc","Type":"ContainerDied","Data":"e881d01064dac965789a0edcc055514c0206aa583f09c79f8ed521bdee4a8966"} Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.741123 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e881d01064dac965789a0edcc055514c0206aa583f09c79f8ed521bdee4a8966" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.741181 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hfxg4-config-dlgtm" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.744046 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7skjt" event={"ID":"7ce95401-372b-4cad-b5b7-82d3575cd3da","Type":"ContainerDied","Data":"7c92856096698fd0852aa19cbd3637630ee33117210060b1e52c57169fdcf196"} Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.744112 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c92856096698fd0852aa19cbd3637630ee33117210060b1e52c57169fdcf196" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.744061 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7skjt" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.747778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7666-account-create-update-zclc9" event={"ID":"fda52a79-9d85-4a62-935e-b0e43270148c","Type":"ContainerDied","Data":"69b1a3b466e52d0f46e3789e3e71efe59bdf5148634a19ab510dd7cef801cb01"} Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.747818 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b1a3b466e52d0f46e3789e3e71efe59bdf5148634a19ab510dd7cef801cb01" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.747878 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7666-account-create-update-zclc9" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.750641 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hn79k" event={"ID":"82676b1f-f4f3-42df-be67-62bbd3373116","Type":"ContainerStarted","Data":"82a8fc7ae9f37519fc7ebe027c4a5fa667aca583e33dc56a6abbeaea7b1ed252"} Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.752958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-20a8-account-create-update-8lpn8" event={"ID":"3b2a6381-8062-465d-b129-ac4153f9305e","Type":"ContainerDied","Data":"f82702ccc79292d5f4c4c1b6a060aa1184b4d4cda0c42e86c5157d8ca54e0366"} Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.753006 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82702ccc79292d5f4c4c1b6a060aa1184b4d4cda0c42e86c5157d8ca54e0366" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.753091 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-20a8-account-create-update-8lpn8" Dec 02 10:35:02 crc kubenswrapper[4813]: I1202 10:35:02.773022 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hn79k" podStartSLOduration=2.393268679 podStartE2EDuration="7.773003197s" podCreationTimestamp="2025-12-02 10:34:55 +0000 UTC" firstStartedPulling="2025-12-02 10:34:56.489314086 +0000 UTC m=+1620.684488388" lastFinishedPulling="2025-12-02 10:35:01.869048604 +0000 UTC m=+1626.064222906" observedRunningTime="2025-12-02 10:35:02.768462198 +0000 UTC m=+1626.963636520" watchObservedRunningTime="2025-12-02 10:35:02.773003197 +0000 UTC m=+1626.968177499" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.112841 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hfxg4-config-dlgtm"] Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.121150 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hfxg4-config-dlgtm"] Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.576911 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-6rqdd"] Dec 02 10:35:03 crc kubenswrapper[4813]: E1202 10:35:03.578984 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2a6381-8062-465d-b129-ac4153f9305e" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579011 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2a6381-8062-465d-b129-ac4153f9305e" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: E1202 10:35:03.579034 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06549f00-ca55-4da7-9b6f-e3011d0550cc" containerName="ovn-config" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579044 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="06549f00-ca55-4da7-9b6f-e3011d0550cc" containerName="ovn-config" Dec 02 10:35:03 crc kubenswrapper[4813]: E1202 10:35:03.579053 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5ddb27-75c7-48ac-9357-409e97f3020e" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579061 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5ddb27-75c7-48ac-9357-409e97f3020e" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: E1202 10:35:03.579095 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f206f49-f7ca-479b-843e-2377ddb90ce1" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579103 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f206f49-f7ca-479b-843e-2377ddb90ce1" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: E1202 10:35:03.579115 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20b5439-344c-4a1b-a474-d9e7939e7e3e" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579124 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20b5439-344c-4a1b-a474-d9e7939e7e3e" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: E1202 10:35:03.579140 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda52a79-9d85-4a62-935e-b0e43270148c" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579147 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fda52a79-9d85-4a62-935e-b0e43270148c" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: E1202 10:35:03.579159 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5e6919-d274-40a5-b500-77f83781a452" containerName="glance-db-sync" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579168 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5e6919-d274-40a5-b500-77f83781a452" containerName="glance-db-sync" Dec 02 10:35:03 crc kubenswrapper[4813]: E1202 10:35:03.579183 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce95401-372b-4cad-b5b7-82d3575cd3da" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579191 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce95401-372b-4cad-b5b7-82d3575cd3da" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579394 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20b5439-344c-4a1b-a474-d9e7939e7e3e" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579409 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2a6381-8062-465d-b129-ac4153f9305e" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579453 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f206f49-f7ca-479b-843e-2377ddb90ce1" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579480 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce95401-372b-4cad-b5b7-82d3575cd3da" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579494 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5ddb27-75c7-48ac-9357-409e97f3020e" containerName="mariadb-database-create" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579513 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5e6919-d274-40a5-b500-77f83781a452" containerName="glance-db-sync" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579528 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda52a79-9d85-4a62-935e-b0e43270148c" containerName="mariadb-account-create-update" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.579545 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="06549f00-ca55-4da7-9b6f-e3011d0550cc" containerName="ovn-config" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.584907 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.615874 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-6rqdd"] Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.678042 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.678129 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfcll\" (UniqueName: \"kubernetes.io/projected/54abaad7-bb5c-440b-9b2e-36ba6684d88a-kube-api-access-gfcll\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.678212 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-config\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.678249 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.678308 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.780549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.780607 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfcll\" (UniqueName: \"kubernetes.io/projected/54abaad7-bb5c-440b-9b2e-36ba6684d88a-kube-api-access-gfcll\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.780683 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-config\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.780716 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.780751 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.782579 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.786503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-config\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.787562 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.790696 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.849506 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfcll\" (UniqueName: \"kubernetes.io/projected/54abaad7-bb5c-440b-9b2e-36ba6684d88a-kube-api-access-gfcll\") pod \"dnsmasq-dns-54f9b7b8d9-6rqdd\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:03 crc kubenswrapper[4813]: I1202 10:35:03.932490 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:04 crc kubenswrapper[4813]: I1202 10:35:04.087405 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06549f00-ca55-4da7-9b6f-e3011d0550cc" path="/var/lib/kubelet/pods/06549f00-ca55-4da7-9b6f-e3011d0550cc/volumes" Dec 02 10:35:04 crc kubenswrapper[4813]: I1202 10:35:04.415721 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-6rqdd"] Dec 02 10:35:04 crc kubenswrapper[4813]: W1202 10:35:04.419374 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54abaad7_bb5c_440b_9b2e_36ba6684d88a.slice/crio-0b844a5248c315e9892d9a7eae2a2bfb52f0d1e6207e277c8c6390fb75dd5591 WatchSource:0}: Error finding container 0b844a5248c315e9892d9a7eae2a2bfb52f0d1e6207e277c8c6390fb75dd5591: Status 404 returned error can't find the container with id 0b844a5248c315e9892d9a7eae2a2bfb52f0d1e6207e277c8c6390fb75dd5591 Dec 02 10:35:04 crc kubenswrapper[4813]: I1202 10:35:04.770704 4813 generic.go:334] "Generic (PLEG): container finished" podID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" containerID="439f50ace2179343b891cb0d02f690878896a456f5171fc9814b168b42a40940" exitCode=0 Dec 02 10:35:04 crc kubenswrapper[4813]: I1202 10:35:04.770757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" event={"ID":"54abaad7-bb5c-440b-9b2e-36ba6684d88a","Type":"ContainerDied","Data":"439f50ace2179343b891cb0d02f690878896a456f5171fc9814b168b42a40940"} Dec 02 10:35:04 crc kubenswrapper[4813]: I1202 10:35:04.770798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" event={"ID":"54abaad7-bb5c-440b-9b2e-36ba6684d88a","Type":"ContainerStarted","Data":"0b844a5248c315e9892d9a7eae2a2bfb52f0d1e6207e277c8c6390fb75dd5591"} Dec 02 10:35:05 crc kubenswrapper[4813]: I1202 10:35:05.779063 4813 generic.go:334] "Generic (PLEG): container finished" podID="82676b1f-f4f3-42df-be67-62bbd3373116" containerID="82a8fc7ae9f37519fc7ebe027c4a5fa667aca583e33dc56a6abbeaea7b1ed252" exitCode=0 Dec 02 10:35:05 crc kubenswrapper[4813]: I1202 10:35:05.779105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hn79k" event={"ID":"82676b1f-f4f3-42df-be67-62bbd3373116","Type":"ContainerDied","Data":"82a8fc7ae9f37519fc7ebe027c4a5fa667aca583e33dc56a6abbeaea7b1ed252"} Dec 02 10:35:05 crc kubenswrapper[4813]: I1202 10:35:05.781503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" event={"ID":"54abaad7-bb5c-440b-9b2e-36ba6684d88a","Type":"ContainerStarted","Data":"d9ac97e257eeaea5cceccffec6ebf65e1fc5ea925aeda6eb90d03330b785d75c"} Dec 02 10:35:05 crc kubenswrapper[4813]: I1202 10:35:05.781602 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:05 crc kubenswrapper[4813]: I1202 10:35:05.820917 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" podStartSLOduration=2.820901031 podStartE2EDuration="2.820901031s" podCreationTimestamp="2025-12-02 10:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:05.820609782 +0000 UTC m=+1630.015784084" watchObservedRunningTime="2025-12-02 10:35:05.820901031 +0000 UTC m=+1630.016075333" Dec 02 10:35:07 crc 
kubenswrapper[4813]: I1202 10:35:07.105459 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hn79k" Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.241188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-config-data\") pod \"82676b1f-f4f3-42df-be67-62bbd3373116\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.241440 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-combined-ca-bundle\") pod \"82676b1f-f4f3-42df-be67-62bbd3373116\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.243048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csgxk\" (UniqueName: \"kubernetes.io/projected/82676b1f-f4f3-42df-be67-62bbd3373116-kube-api-access-csgxk\") pod \"82676b1f-f4f3-42df-be67-62bbd3373116\" (UID: \"82676b1f-f4f3-42df-be67-62bbd3373116\") " Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.247448 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82676b1f-f4f3-42df-be67-62bbd3373116-kube-api-access-csgxk" (OuterVolumeSpecName: "kube-api-access-csgxk") pod "82676b1f-f4f3-42df-be67-62bbd3373116" (UID: "82676b1f-f4f3-42df-be67-62bbd3373116"). InnerVolumeSpecName "kube-api-access-csgxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.274793 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82676b1f-f4f3-42df-be67-62bbd3373116" (UID: "82676b1f-f4f3-42df-be67-62bbd3373116"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.290826 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-config-data" (OuterVolumeSpecName: "config-data") pod "82676b1f-f4f3-42df-be67-62bbd3373116" (UID: "82676b1f-f4f3-42df-be67-62bbd3373116"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.345533 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csgxk\" (UniqueName: \"kubernetes.io/projected/82676b1f-f4f3-42df-be67-62bbd3373116-kube-api-access-csgxk\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.345570 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.345584 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82676b1f-f4f3-42df-be67-62bbd3373116-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.801671 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hn79k" event={"ID":"82676b1f-f4f3-42df-be67-62bbd3373116","Type":"ContainerDied","Data":"42deb5b1814d39d99128a89f01aa8fd0ddbb37cbf99a1e10ef707bc5b99b5a36"} Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.801715 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42deb5b1814d39d99128a89f01aa8fd0ddbb37cbf99a1e10ef707bc5b99b5a36" Dec 02 10:35:07 crc kubenswrapper[4813]: I1202 10:35:07.801739 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hn79k" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.040136 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-6rqdd"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.040799 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" podUID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" containerName="dnsmasq-dns" containerID="cri-o://d9ac97e257eeaea5cceccffec6ebf65e1fc5ea925aeda6eb90d03330b785d75c" gracePeriod=10 Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.063213 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-csmg6"] Dec 02 10:35:08 crc kubenswrapper[4813]: E1202 10:35:08.063673 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82676b1f-f4f3-42df-be67-62bbd3373116" containerName="keystone-db-sync" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.063694 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="82676b1f-f4f3-42df-be67-62bbd3373116" containerName="keystone-db-sync" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.063892 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="82676b1f-f4f3-42df-be67-62bbd3373116" containerName="keystone-db-sync" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.064623 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.067487 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.067898 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.068097 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.068272 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.067995 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tsjhn" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.095457 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-csmg6"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.113098 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-5m7zq"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.114695 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.159811 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgkj6\" (UniqueName: \"kubernetes.io/projected/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-kube-api-access-pgkj6\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.159936 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-scripts\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.159981 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-combined-ca-bundle\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.160030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-fernet-keys\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.160057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-config-data\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.160176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-credential-keys\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.175128 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-5m7zq"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262105 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7ww\" (UniqueName: \"kubernetes.io/projected/f19d806b-8c0b-4476-9d95-3db3562ab057-kube-api-access-pq7ww\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-config\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262267 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262344 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-scripts\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262397 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-combined-ca-bundle\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262461 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262511 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-fernet-keys\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-config-data\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 
10:35:08.262610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-credential-keys\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262676 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgkj6\" (UniqueName: \"kubernetes.io/projected/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-kube-api-access-pgkj6\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.262715 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-dns-svc\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.269354 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-config-data\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.282025 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-combined-ca-bundle\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.282431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-fernet-keys\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.282728 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-scripts\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.291051 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2gtvp"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.291326 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-credential-keys\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.292136 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.296115 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.296357 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wj4pr" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.296576 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.307094 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2gtvp"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.348748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgkj6\" (UniqueName: \"kubernetes.io/projected/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-kube-api-access-pgkj6\") pod \"keystone-bootstrap-csmg6\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") " pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365226 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-combined-ca-bundle\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365285 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-scripts\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365333 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-dns-svc\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365351 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aeebb6e7-c26e-421b-ab9c-4b75379601bf-etc-machine-id\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365376 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwmbj\" (UniqueName: \"kubernetes.io/projected/aeebb6e7-c26e-421b-ab9c-4b75379601bf-kube-api-access-xwmbj\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365394 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-config-data\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365418 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-pq7ww\" (UniqueName: \"kubernetes.io/projected/f19d806b-8c0b-4476-9d95-3db3562ab057-kube-api-access-pq7ww\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365437 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-db-sync-config-data\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-config\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.365590 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.366692 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.367609 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-config\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.367779 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.368009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-dns-svc\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.380178 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.382523 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.394583 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.395217 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-csmg6" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.397564 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.400453 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7ww\" (UniqueName: \"kubernetes.io/projected/f19d806b-8c0b-4476-9d95-3db3562ab057-kube-api-access-pq7ww\") pod \"dnsmasq-dns-6546db6db7-5m7zq\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.401849 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kd7cw"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.414870 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.426679 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.427275 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fwfrt" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.427474 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.434921 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kd7cw"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.443156 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467323 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8mk\" (UniqueName: \"kubernetes.io/projected/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-kube-api-access-cq8mk\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467386 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-run-httpd\") pod 
\"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467414 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-config-data\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467444 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-combined-ca-bundle\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467473 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-scripts\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467497 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-scripts\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467539 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aeebb6e7-c26e-421b-ab9c-4b75379601bf-etc-machine-id\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467565 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwmbj\" (UniqueName: \"kubernetes.io/projected/aeebb6e7-c26e-421b-ab9c-4b75379601bf-kube-api-access-xwmbj\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467613 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-config-data\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.467647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-db-sync-config-data\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.475052 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-db-sync-config-data\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.478228 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aeebb6e7-c26e-421b-ab9c-4b75379601bf-etc-machine-id\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.493890 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-combined-ca-bundle\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.507402 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-config-data\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.514651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-scripts\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.549695 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.602796 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwmbj\" (UniqueName: \"kubernetes.io/projected/aeebb6e7-c26e-421b-ab9c-4b75379601bf-kube-api-access-xwmbj\") pod \"cinder-db-sync-2gtvp\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.602881 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-hz75q"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629439 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629544 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq8mk\" (UniqueName: \"kubernetes.io/projected/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-kube-api-access-cq8mk\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629616 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-config-data\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-scripts\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629824 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629907 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-config\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.629995 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-combined-ca-bundle\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.630057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtz9c\" (UniqueName: \"kubernetes.io/projected/41229fd1-12e2-41db-96d9-ac6349cf5756-kube-api-access-qtz9c\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.631213 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.649489 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.654673 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.660842 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.666310 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-scripts\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.674573 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.674846 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jpdgh" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.676829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.680619 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-config-data\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.736718 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4md7t"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.738064 4813 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.738121 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-combined-ca-bundle\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.738173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtz9c\" (UniqueName: \"kubernetes.io/projected/41229fd1-12e2-41db-96d9-ac6349cf5756-kube-api-access-qtz9c\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.738332 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-config\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.749962 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.750291 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.750468 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zvndq" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.751437 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-combined-ca-bundle\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.754830 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-config\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.757387 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq8mk\" (UniqueName: \"kubernetes.io/projected/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-kube-api-access-cq8mk\") pod \"ceilometer-0\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.764049 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-5m7zq"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.764770 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.790614 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.796316 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtz9c\" (UniqueName: \"kubernetes.io/projected/41229fd1-12e2-41db-96d9-ac6349cf5756-kube-api-access-qtz9c\") pod \"neutron-db-sync-kd7cw\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.825818 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hz75q"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.849124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-config-data\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.849211 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-scripts\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.849249 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv8pk\" (UniqueName: \"kubernetes.io/projected/ceb09f23-052f-4207-8c2b-ea7736d76499-kube-api-access-mv8pk\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.849305 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-combined-ca-bundle\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.849360 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc7vr\" (UniqueName: \"kubernetes.io/projected/5b7b8eae-da35-4f54-83ec-6343ebedecfa-kube-api-access-pc7vr\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.849397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-db-sync-config-data\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.849420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-combined-ca-bundle\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.849444 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb09f23-052f-4207-8c2b-ea7736d76499-logs\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.859487 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.868640 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4md7t"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.884925 4813 generic.go:334] "Generic (PLEG): container finished" podID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" containerID="d9ac97e257eeaea5cceccffec6ebf65e1fc5ea925aeda6eb90d03330b785d75c" exitCode=0 Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.884965 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" event={"ID":"54abaad7-bb5c-440b-9b2e-36ba6684d88a","Type":"ContainerDied","Data":"d9ac97e257eeaea5cceccffec6ebf65e1fc5ea925aeda6eb90d03330b785d75c"} Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.884992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" event={"ID":"54abaad7-bb5c-440b-9b2e-36ba6684d88a","Type":"ContainerDied","Data":"0b844a5248c315e9892d9a7eae2a2bfb52f0d1e6207e277c8c6390fb75dd5591"} Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.885002 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b844a5248c315e9892d9a7eae2a2bfb52f0d1e6207e277c8c6390fb75dd5591" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.889142 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-m7dlx"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.892643 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.950944 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-config-data\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.951032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-scripts\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.951070 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv8pk\" (UniqueName: \"kubernetes.io/projected/ceb09f23-052f-4207-8c2b-ea7736d76499-kube-api-access-mv8pk\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.951144 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-combined-ca-bundle\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.951203 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc7vr\" (UniqueName: \"kubernetes.io/projected/5b7b8eae-da35-4f54-83ec-6343ebedecfa-kube-api-access-pc7vr\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.951244 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-db-sync-config-data\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.951269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-combined-ca-bundle\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.951294 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb09f23-052f-4207-8c2b-ea7736d76499-logs\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.951736 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb09f23-052f-4207-8c2b-ea7736d76499-logs\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.952444 4813 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-m7dlx"] Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.962870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-db-sync-config-data\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.971971 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-combined-ca-bundle\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.989455 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.991481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-config-data\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.993560 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-scripts\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:08 crc kubenswrapper[4813]: I1202 10:35:08.994575 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-combined-ca-bundle\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.016512 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv8pk\" (UniqueName: \"kubernetes.io/projected/ceb09f23-052f-4207-8c2b-ea7736d76499-kube-api-access-mv8pk\") pod \"placement-db-sync-4md7t\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.017484 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc7vr\" (UniqueName: \"kubernetes.io/projected/5b7b8eae-da35-4f54-83ec-6343ebedecfa-kube-api-access-pc7vr\") pod \"barbican-db-sync-hz75q\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.049208 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.052432 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.052517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.052546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mm5w\" (UniqueName: \"kubernetes.io/projected/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-kube-api-access-2mm5w\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.052572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-config\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.052594 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.155497 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-dns-svc\") pod \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.155972 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfcll\" (UniqueName: \"kubernetes.io/projected/54abaad7-bb5c-440b-9b2e-36ba6684d88a-kube-api-access-gfcll\") pod \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.156154 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-sb\") pod \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.156174 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-nb\") pod \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 
10:35:09.156242 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-config\") pod \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\" (UID: \"54abaad7-bb5c-440b-9b2e-36ba6684d88a\") " Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.156581 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.156677 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.156702 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-config\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.156725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mm5w\" (UniqueName: \"kubernetes.io/projected/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-kube-api-access-2mm5w\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.156757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.161737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.164582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-config\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.165270 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54abaad7-bb5c-440b-9b2e-36ba6684d88a-kube-api-access-gfcll" (OuterVolumeSpecName: "kube-api-access-gfcll") pod "54abaad7-bb5c-440b-9b2e-36ba6684d88a" (UID: "54abaad7-bb5c-440b-9b2e-36ba6684d88a"). InnerVolumeSpecName "kube-api-access-gfcll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.165683 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.168535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.176511 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.193296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mm5w\" (UniqueName: \"kubernetes.io/projected/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-kube-api-access-2mm5w\") pod \"dnsmasq-dns-7987f74bbc-m7dlx\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") " pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.265159 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfcll\" (UniqueName: \"kubernetes.io/projected/54abaad7-bb5c-440b-9b2e-36ba6684d88a-kube-api-access-gfcll\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.301564 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.316058 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54abaad7-bb5c-440b-9b2e-36ba6684d88a" (UID: "54abaad7-bb5c-440b-9b2e-36ba6684d88a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.316708 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54abaad7-bb5c-440b-9b2e-36ba6684d88a" (UID: "54abaad7-bb5c-440b-9b2e-36ba6684d88a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.318751 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-config" (OuterVolumeSpecName: "config") pod "54abaad7-bb5c-440b-9b2e-36ba6684d88a" (UID: "54abaad7-bb5c-440b-9b2e-36ba6684d88a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.330772 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-csmg6"] Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.367182 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.367222 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.367236 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.376012 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54abaad7-bb5c-440b-9b2e-36ba6684d88a" (UID: "54abaad7-bb5c-440b-9b2e-36ba6684d88a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:09 crc kubenswrapper[4813]: W1202 10:35:09.384429 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd1136ca_00a0_4846_a2b7_32d71c5e8a06.slice/crio-96b1565f0a588be04b5baf147d711384cc2856c4315a2ad7a6f249df7c5c4a90 WatchSource:0}: Error finding container 96b1565f0a588be04b5baf147d711384cc2856c4315a2ad7a6f249df7c5c4a90: Status 404 returned error can't find the container with id 96b1565f0a588be04b5baf147d711384cc2856c4315a2ad7a6f249df7c5c4a90 Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.469276 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54abaad7-bb5c-440b-9b2e-36ba6684d88a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.503942 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-5m7zq"] Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.658846 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2gtvp"] Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.670944 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kd7cw"] Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.683534 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:09 crc kubenswrapper[4813]: W1202 10:35:09.687661 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41229fd1_12e2_41db_96d9_ac6349cf5756.slice/crio-caacd26303c14244f442b619b0544e9017836319460dc0eaedde19d728bec2e5 WatchSource:0}: Error finding container caacd26303c14244f442b619b0544e9017836319460dc0eaedde19d728bec2e5: Status 404 returned error can't find the container with id caacd26303c14244f442b619b0544e9017836319460dc0eaedde19d728bec2e5 Dec 02 10:35:09 crc kubenswrapper[4813]: W1202 10:35:09.699090 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0427545d_5ab6_45fd_9d6a_ec1614b54c2c.slice/crio-064ea4504f30760ee120f7f3994689ceaac817ef01a6e8c8bd3c5d72e5bb44af WatchSource:0}: Error finding container 064ea4504f30760ee120f7f3994689ceaac817ef01a6e8c8bd3c5d72e5bb44af: Status 404 returned error can't find the container with id 064ea4504f30760ee120f7f3994689ceaac817ef01a6e8c8bd3c5d72e5bb44af Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.828013 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hz75q"] Dec 02 10:35:09 crc kubenswrapper[4813]: W1202 10:35:09.832401 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b7b8eae_da35_4f54_83ec_6343ebedecfa.slice/crio-f33246d80a30d1bfcecc66ccc70700ef2112a973d9cb354673a4aa14444cd7ff WatchSource:0}: Error finding container f33246d80a30d1bfcecc66ccc70700ef2112a973d9cb354673a4aa14444cd7ff: Status 404 returned error can't find the container with id f33246d80a30d1bfcecc66ccc70700ef2112a973d9cb354673a4aa14444cd7ff Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.894755 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kd7cw" event={"ID":"41229fd1-12e2-41db-96d9-ac6349cf5756","Type":"ContainerStarted","Data":"caacd26303c14244f442b619b0544e9017836319460dc0eaedde19d728bec2e5"} Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.895584 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" event={"ID":"f19d806b-8c0b-4476-9d95-3db3562ab057","Type":"ContainerStarted","Data":"5e1ee8b7fc3e60a415305677bef7defe882a2c30003113a65c448f89dbf7a51b"} Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.896429 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hz75q" event={"ID":"5b7b8eae-da35-4f54-83ec-6343ebedecfa","Type":"ContainerStarted","Data":"f33246d80a30d1bfcecc66ccc70700ef2112a973d9cb354673a4aa14444cd7ff"} Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.897176 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0427545d-5ab6-45fd-9d6a-ec1614b54c2c","Type":"ContainerStarted","Data":"064ea4504f30760ee120f7f3994689ceaac817ef01a6e8c8bd3c5d72e5bb44af"} Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.898033 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2gtvp" event={"ID":"aeebb6e7-c26e-421b-ab9c-4b75379601bf","Type":"ContainerStarted","Data":"7817fd4ba92417cfbdce30f8455b498201bc2713f02a089162974c4d65a763cd"} Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.908238 4813 util.go:48] "No ready sandbox for pod can be found. 
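The W1202 manager.go:1169 warnings above appear to come from the kubelet's embedded cAdvisor: it sees a new crio-* cgroup under kubepods-besteffort.slice before the runtime has registered the container, asks CRI-O for it, and gets a 404. During bursts of pod creation like this one such races are typically transient and harmless. klog prefixes encode severity in the first character (I, W, E, and F for fatal), so a saved copy of this journal can be summarized by severity and source location; the file name and regex below are illustrative assumptions:

```python
import re
from collections import Counter

# klog prefix: severity letter + MMDD, a timestamp, a thread id, then "file.go:line]".
KLOG = re.compile(r'\b([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+ ([\w./]+:\d+)\]')

def severity_histogram(log_path):
    """Count journal entries per (severity, file.go:line) pair."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            m = KLOG.search(line)
            if m:
                counts[(m.group(1), m.group(4))] += 1
    return counts

if __name__ == "__main__":
    for (sev, loc), n in severity_histogram("kubelet.log").most_common(10):
        print(f"{sev} {loc}: {n}")
```

A spike of W entries at manager.go:1169 that coincides with SyncLoop ADD events, as here, points at the startup race rather than a real runtime fault.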
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-6rqdd" Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.908363 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-csmg6" event={"ID":"bd1136ca-00a0-4846-a2b7-32d71c5e8a06","Type":"ContainerStarted","Data":"96b1565f0a588be04b5baf147d711384cc2856c4315a2ad7a6f249df7c5c4a90"} Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.925837 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4md7t"] Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.950837 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-6rqdd"] Dec 02 10:35:09 crc kubenswrapper[4813]: I1202 10:35:09.958512 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-6rqdd"] Dec 02 10:35:10 crc kubenswrapper[4813]: W1202 10:35:10.075373 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c4eb3d5_1f3c_4beb_8170_9edbea23a2fd.slice/crio-2a79b49cc76fb2a97a6151bf73ff8aa38e44407e396cdbd63258b1166b86d294 WatchSource:0}: Error finding container 2a79b49cc76fb2a97a6151bf73ff8aa38e44407e396cdbd63258b1166b86d294: Status 404 returned error can't find the container with id 2a79b49cc76fb2a97a6151bf73ff8aa38e44407e396cdbd63258b1166b86d294 Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.081118 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" path="/var/lib/kubelet/pods/54abaad7-bb5c-440b-9b2e-36ba6684d88a/volumes" Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.081837 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-m7dlx"] Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.497054 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.963328 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kd7cw" event={"ID":"41229fd1-12e2-41db-96d9-ac6349cf5756","Type":"ContainerStarted","Data":"4f93d146244abe410fd11e1ff1304257526953280fff9afc549a43ebdbfda79a"} Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.972904 4813 generic.go:334] "Generic (PLEG): container finished" podID="f19d806b-8c0b-4476-9d95-3db3562ab057" containerID="afea725d13016f6d46be075cfb1f9d173d4498860415c05ffde902719cf2f727" exitCode=0 Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.973478 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" event={"ID":"f19d806b-8c0b-4476-9d95-3db3562ab057","Type":"ContainerDied","Data":"afea725d13016f6d46be075cfb1f9d173d4498860415c05ffde902719cf2f727"} Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.982307 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" containerID="824a85ed507d70a2390a662099c1de3113cc150fa2ea87dec978bfdd0ec78fb6" exitCode=0 Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.982378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" event={"ID":"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd","Type":"ContainerDied","Data":"824a85ed507d70a2390a662099c1de3113cc150fa2ea87dec978bfdd0ec78fb6"} Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.982406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" event={"ID":"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd","Type":"ContainerStarted","Data":"2a79b49cc76fb2a97a6151bf73ff8aa38e44407e396cdbd63258b1166b86d294"} Dec 02 10:35:10 crc kubenswrapper[4813]: I1202 10:35:10.993199 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4md7t" event={"ID":"ceb09f23-052f-4207-8c2b-ea7736d76499","Type":"ContainerStarted","Data":"f394b56135058d74b53f74f3dcf40ec2c615e6688db3d49f4815188e27cd800a"} Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.002771 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kd7cw" podStartSLOduration=3.002742961 podStartE2EDuration="3.002742961s" podCreationTimestamp="2025-12-02 10:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:10.982968321 +0000 UTC m=+1635.178142643" watchObservedRunningTime="2025-12-02 10:35:11.002742961 +0000 UTC m=+1635.197917273" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.017313 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-csmg6" event={"ID":"bd1136ca-00a0-4846-a2b7-32d71c5e8a06","Type":"ContainerStarted","Data":"2ad0dfd6d0e6d61dbd25c4a061ec329daed94f68a3c3469e9d38e5e9c35cb7a4"} Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.075710 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-csmg6" podStartSLOduration=3.075682828 podStartE2EDuration="3.075682828s" podCreationTimestamp="2025-12-02 10:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:11.070395768 +0000 UTC m=+1635.265570080" watchObservedRunningTime="2025-12-02 10:35:11.075682828 +0000 UTC m=+1635.270857130" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.558293 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.724482 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-sb\") pod \"f19d806b-8c0b-4476-9d95-3db3562ab057\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.725190 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7ww\" (UniqueName: \"kubernetes.io/projected/f19d806b-8c0b-4476-9d95-3db3562ab057-kube-api-access-pq7ww\") pod \"f19d806b-8c0b-4476-9d95-3db3562ab057\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.725283 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-config\") pod \"f19d806b-8c0b-4476-9d95-3db3562ab057\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.725857 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-nb\") pod \"f19d806b-8c0b-4476-9d95-3db3562ab057\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.725971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-dns-svc\") pod \"f19d806b-8c0b-4476-9d95-3db3562ab057\" (UID: \"f19d806b-8c0b-4476-9d95-3db3562ab057\") " Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.744675 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19d806b-8c0b-4476-9d95-3db3562ab057-kube-api-access-pq7ww" (OuterVolumeSpecName: "kube-api-access-pq7ww") pod "f19d806b-8c0b-4476-9d95-3db3562ab057" (UID: "f19d806b-8c0b-4476-9d95-3db3562ab057"). InnerVolumeSpecName "kube-api-access-pq7ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.748179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f19d806b-8c0b-4476-9d95-3db3562ab057" (UID: "f19d806b-8c0b-4476-9d95-3db3562ab057"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.748365 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-config" (OuterVolumeSpecName: "config") pod "f19d806b-8c0b-4476-9d95-3db3562ab057" (UID: "f19d806b-8c0b-4476-9d95-3db3562ab057"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.763726 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f19d806b-8c0b-4476-9d95-3db3562ab057" (UID: "f19d806b-8c0b-4476-9d95-3db3562ab057"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.769169 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f19d806b-8c0b-4476-9d95-3db3562ab057" (UID: "f19d806b-8c0b-4476-9d95-3db3562ab057"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.827534 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7ww\" (UniqueName: \"kubernetes.io/projected/f19d806b-8c0b-4476-9d95-3db3562ab057-kube-api-access-pq7ww\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.828805 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.828827 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.828838 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:11 crc kubenswrapper[4813]: I1202 10:35:11.828847 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f19d806b-8c0b-4476-9d95-3db3562ab057-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:12 crc kubenswrapper[4813]: I1202 10:35:12.030712 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" event={"ID":"f19d806b-8c0b-4476-9d95-3db3562ab057","Type":"ContainerDied","Data":"5e1ee8b7fc3e60a415305677bef7defe882a2c30003113a65c448f89dbf7a51b"} Dec 02 10:35:12 crc kubenswrapper[4813]: I1202 10:35:12.030734 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-5m7zq" Dec 02 10:35:12 crc kubenswrapper[4813]: I1202 10:35:12.030794 4813 scope.go:117] "RemoveContainer" containerID="afea725d13016f6d46be075cfb1f9d173d4498860415c05ffde902719cf2f727" Dec 02 10:35:12 crc kubenswrapper[4813]: I1202 10:35:12.037818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" event={"ID":"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd","Type":"ContainerStarted","Data":"61ae236aa9d2ec32ce4dc321a2ff7357989c540d24008f52547aaa9bf7ca455c"} Dec 02 10:35:12 crc kubenswrapper[4813]: I1202 10:35:12.038315 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:12 crc kubenswrapper[4813]: I1202 10:35:12.055358 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" podStartSLOduration=4.055302896 podStartE2EDuration="4.055302896s" podCreationTimestamp="2025-12-02 10:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:12.053591867 +0000 UTC m=+1636.248766169" watchObservedRunningTime="2025-12-02 10:35:12.055302896 +0000 UTC m=+1636.250477198" Dec 02 10:35:12 crc kubenswrapper[4813]: I1202 10:35:12.116135 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-5m7zq"] Dec 02 10:35:12 crc kubenswrapper[4813]: I1202 10:35:12.125472 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-5m7zq"] Dec 02 10:35:13 crc kubenswrapper[4813]: I1202 10:35:13.068163 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:35:13 crc kubenswrapper[4813]: E1202 10:35:13.068753 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:35:14 crc kubenswrapper[4813]: I1202 10:35:14.080631 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19d806b-8c0b-4476-9d95-3db3562ab057" path="/var/lib/kubelet/pods/f19d806b-8c0b-4476-9d95-3db3562ab057/volumes" Dec 02 10:35:16 crc kubenswrapper[4813]: I1202 10:35:16.098368 4813 generic.go:334] "Generic (PLEG): container finished" podID="bd1136ca-00a0-4846-a2b7-32d71c5e8a06" containerID="2ad0dfd6d0e6d61dbd25c4a061ec329daed94f68a3c3469e9d38e5e9c35cb7a4" exitCode=0 Dec 02 10:35:16 crc kubenswrapper[4813]: I1202 10:35:16.098872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-csmg6" event={"ID":"bd1136ca-00a0-4846-a2b7-32d71c5e8a06","Type":"ContainerDied","Data":"2ad0dfd6d0e6d61dbd25c4a061ec329daed94f68a3c3469e9d38e5e9c35cb7a4"} Dec 02 10:35:19 crc kubenswrapper[4813]: I1202 10:35:19.303745 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" Dec 02 10:35:19 crc kubenswrapper[4813]: I1202 10:35:19.367778 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-p7898"] Dec 02 10:35:19 crc kubenswrapper[4813]: I1202 10:35:19.368044 4813 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="dnsmasq-dns" containerID="cri-o://970a8636ab28dd4f4e87d458526aae325fd2b6140fc5dd85ec5b430dee52b282" gracePeriod=10 Dec 02 10:35:21 crc kubenswrapper[4813]: I1202 10:35:21.148817 4813 generic.go:334] "Generic (PLEG): container finished" podID="608bffa9-2843-4086-a271-a1ffb445b00a" containerID="970a8636ab28dd4f4e87d458526aae325fd2b6140fc5dd85ec5b430dee52b282" exitCode=0 Dec 02 10:35:21 crc kubenswrapper[4813]: I1202 10:35:21.148906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" event={"ID":"608bffa9-2843-4086-a271-a1ffb445b00a","Type":"ContainerDied","Data":"970a8636ab28dd4f4e87d458526aae325fd2b6140fc5dd85ec5b430dee52b282"} Dec 02 10:35:24 crc kubenswrapper[4813]: I1202 10:35:24.068365 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:35:24 crc kubenswrapper[4813]: E1202 10:35:24.069096 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:35:25 crc kubenswrapper[4813]: E1202 10:35:25.934557 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 02 10:35:25 crc kubenswrapper[4813]: E1202 10:35:25.935209 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ch55fh5bfh555h68dh548hdfh579hdbh55ch6h5bbh74h5f9h9dhfchb7h558h5fch599h85h65dhbfh676h5c9h6dh677h666h559h77h5cfh85q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cq8mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
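Two failure modes sit next to each other above: an ErrImagePull for ceilometer-central (the pull was canceled mid-copy, so the container start was aborted) and a CrashLoopBackOff for machine-config-daemon, where the "back-off 5m0s" is the kubelet's restart back-off, which grows per failed restart up to a cap of five minutes. A sketch for tallying the back-off events from a saved journal, which can make a persistent crash loop stand out from a one-off restart; the file name, regex, and helper are illustrative assumptions:

```python
import re
from collections import Counter

# "Error syncing pod" entries name the failing container, the pod, and the
# current back-off, e.g. "back-off 5m0s restarting failed container=...".
BACKOFF = re.compile(
    r'back-off (?P<delay>\S+) restarting failed '
    r'container=(?P<container>\S+) pod=(?P<pod>[^\s(]+)'
)

def backoff_events(log_path):
    """Count CrashLoopBackOff sync errors per (pod, container, delay)."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            if "Error syncing pod" in line:
                m = BACKOFF.search(line)
                if m:
                    counts[(m.group("pod"), m.group("container"), m.group("delay"))] += 1
    return counts

if __name__ == "__main__":
    for (pod, container, delay), n in backoff_events("kubelet.log").items():
        print(f"{pod}/{container}: {n} back-off error(s) at {delay}")
```

Repeated entries already at the 5m0s cap, as for machine-config-daemon here, mean the container has been failing long enough to exhaust the back-off ramp.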
Dec 02 10:35:25 crc kubenswrapper[4813]: I1202 10:35:25.952832 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-csmg6"
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.007922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgkj6\" (UniqueName: \"kubernetes.io/projected/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-kube-api-access-pgkj6\") pod \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") "
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.008057 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-scripts\") pod \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") "
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.008109 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-combined-ca-bundle\") pod \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") "
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.008203 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-fernet-keys\") pod \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") "
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.008342 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-credential-keys\") pod \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") "
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.008385 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-config-data\") pod \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\" (UID: \"bd1136ca-00a0-4846-a2b7-32d71c5e8a06\") "
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.019176 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-kube-api-access-pgkj6" (OuterVolumeSpecName: "kube-api-access-pgkj6") pod "bd1136ca-00a0-4846-a2b7-32d71c5e8a06" (UID: "bd1136ca-00a0-4846-a2b7-32d71c5e8a06"). InnerVolumeSpecName "kube-api-access-pgkj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.020714 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-scripts" (OuterVolumeSpecName: "scripts") pod "bd1136ca-00a0-4846-a2b7-32d71c5e8a06" (UID: "bd1136ca-00a0-4846-a2b7-32d71c5e8a06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.020792 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bd1136ca-00a0-4846-a2b7-32d71c5e8a06" (UID: "bd1136ca-00a0-4846-a2b7-32d71c5e8a06"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.022702 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bd1136ca-00a0-4846-a2b7-32d71c5e8a06" (UID: "bd1136ca-00a0-4846-a2b7-32d71c5e8a06"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.044655 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-config-data" (OuterVolumeSpecName: "config-data") pod "bd1136ca-00a0-4846-a2b7-32d71c5e8a06" (UID: "bd1136ca-00a0-4846-a2b7-32d71c5e8a06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.063117 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd1136ca-00a0-4846-a2b7-32d71c5e8a06" (UID: "bd1136ca-00a0-4846-a2b7-32d71c5e8a06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.111573 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.111606 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.111620 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.111631 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgkj6\" (UniqueName: \"kubernetes.io/projected/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-kube-api-access-pgkj6\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.111642 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.111652 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1136ca-00a0-4846-a2b7-32d71c5e8a06-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.193010 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-csmg6" event={"ID":"bd1136ca-00a0-4846-a2b7-32d71c5e8a06","Type":"ContainerDied","Data":"96b1565f0a588be04b5baf147d711384cc2856c4315a2ad7a6f249df7c5c4a90"}
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.193058 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96b1565f0a588be04b5baf147d711384cc2856c4315a2ad7a6f249df7c5c4a90"
Dec 02 10:35:26 crc kubenswrapper[4813]: I1202 10:35:26.193162 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-csmg6"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.035167 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-csmg6"]
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.045247 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-csmg6"]
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.132218 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d585l"]
Dec 02 10:35:27 crc kubenswrapper[4813]: E1202 10:35:27.132592 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" containerName="init"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.132613 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" containerName="init"
Dec 02 10:35:27 crc kubenswrapper[4813]: E1202 10:35:27.132637 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1136ca-00a0-4846-a2b7-32d71c5e8a06" containerName="keystone-bootstrap"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.132645 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1136ca-00a0-4846-a2b7-32d71c5e8a06" containerName="keystone-bootstrap"
Dec 02 10:35:27 crc kubenswrapper[4813]: E1202 10:35:27.132655 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19d806b-8c0b-4476-9d95-3db3562ab057" containerName="init"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.132660 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19d806b-8c0b-4476-9d95-3db3562ab057" containerName="init"
Dec 02 10:35:27 crc kubenswrapper[4813]: E1202 10:35:27.132679 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" containerName="dnsmasq-dns"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.132686 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" containerName="dnsmasq-dns"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.132874 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19d806b-8c0b-4476-9d95-3db3562ab057" containerName="init"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.132902 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1136ca-00a0-4846-a2b7-32d71c5e8a06" containerName="keystone-bootstrap"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.132919 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="54abaad7-bb5c-440b-9b2e-36ba6684d88a" containerName="dnsmasq-dns"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.133661 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d585l"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.138679 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.138853 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tsjhn"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.139189 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.139212 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.139291 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.159919 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d585l"]
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.238259 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-combined-ca-bundle\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.238333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-fernet-keys\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.238353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-credential-keys\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.238558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6phl\" (UniqueName: \"kubernetes.io/projected/aad57710-e572-4e03-8f87-6770c28d8c0c-kube-api-access-m6phl\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.238648 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-scripts\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.238715 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-config-data\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l"
Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.340045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"kube-api-access-m6phl\" (UniqueName: \"kubernetes.io/projected/aad57710-e572-4e03-8f87-6770c28d8c0c-kube-api-access-m6phl\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.340117 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-scripts\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.340174 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-config-data\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.340228 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-combined-ca-bundle\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.340306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-fernet-keys\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.340345 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-credential-keys\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.344409 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-credential-keys\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.344617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-combined-ca-bundle\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.345117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-fernet-keys\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.352495 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-scripts\") pod \"keystone-bootstrap-d585l\" (UID: 
\"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.363026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6phl\" (UniqueName: \"kubernetes.io/projected/aad57710-e572-4e03-8f87-6770c28d8c0c-kube-api-access-m6phl\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.364652 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-config-data\") pod \"keystone-bootstrap-d585l\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.465436 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:27 crc kubenswrapper[4813]: I1202 10:35:27.904391 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Dec 02 10:35:28 crc kubenswrapper[4813]: I1202 10:35:28.078132 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1136ca-00a0-4846-a2b7-32d71c5e8a06" path="/var/lib/kubelet/pods/bd1136ca-00a0-4846-a2b7-32d71c5e8a06/volumes" Dec 02 10:35:32 crc kubenswrapper[4813]: I1202 10:35:32.905334 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.551745 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.695726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-nb\") pod \"608bffa9-2843-4086-a271-a1ffb445b00a\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.695766 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-sb\") pod \"608bffa9-2843-4086-a271-a1ffb445b00a\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.695789 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-config\") pod \"608bffa9-2843-4086-a271-a1ffb445b00a\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.695865 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9m8b\" (UniqueName: \"kubernetes.io/projected/608bffa9-2843-4086-a271-a1ffb445b00a-kube-api-access-j9m8b\") pod \"608bffa9-2843-4086-a271-a1ffb445b00a\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.695915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-dns-svc\") pod \"608bffa9-2843-4086-a271-a1ffb445b00a\" (UID: \"608bffa9-2843-4086-a271-a1ffb445b00a\") " Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.703795 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608bffa9-2843-4086-a271-a1ffb445b00a-kube-api-access-j9m8b" (OuterVolumeSpecName: "kube-api-access-j9m8b") pod "608bffa9-2843-4086-a271-a1ffb445b00a" (UID: "608bffa9-2843-4086-a271-a1ffb445b00a"). InnerVolumeSpecName "kube-api-access-j9m8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.738233 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "608bffa9-2843-4086-a271-a1ffb445b00a" (UID: "608bffa9-2843-4086-a271-a1ffb445b00a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.752987 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "608bffa9-2843-4086-a271-a1ffb445b00a" (UID: "608bffa9-2843-4086-a271-a1ffb445b00a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.763330 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-config" (OuterVolumeSpecName: "config") pod "608bffa9-2843-4086-a271-a1ffb445b00a" (UID: "608bffa9-2843-4086-a271-a1ffb445b00a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.763872 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "608bffa9-2843-4086-a271-a1ffb445b00a" (UID: "608bffa9-2843-4086-a271-a1ffb445b00a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.798108 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.798147 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.798159 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.798172 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9m8b\" (UniqueName: \"kubernetes.io/projected/608bffa9-2843-4086-a271-a1ffb445b00a-kube-api-access-j9m8b\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:34 crc kubenswrapper[4813]: I1202 10:35:34.798187 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/608bffa9-2843-4086-a271-a1ffb445b00a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:35 crc kubenswrapper[4813]: E1202 10:35:35.028385 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 02 10:35:35 crc kubenswrapper[4813]: E1202 10:35:35.028578 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pc7vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-hz75q_openstack(5b7b8eae-da35-4f54-83ec-6343ebedecfa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:35:35 crc kubenswrapper[4813]: E1202 10:35:35.029791 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-hz75q" podUID="5b7b8eae-da35-4f54-83ec-6343ebedecfa" Dec 02 10:35:35 crc kubenswrapper[4813]: I1202 10:35:35.276618 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" event={"ID":"608bffa9-2843-4086-a271-a1ffb445b00a","Type":"ContainerDied","Data":"c460490e73d1eb8ee8817939df16d6152e3cf0cf7d2b161165c90ede6a28ee9f"} Dec 02 10:35:35 crc kubenswrapper[4813]: I1202 10:35:35.276688 4813 scope.go:117] "RemoveContainer" containerID="970a8636ab28dd4f4e87d458526aae325fd2b6140fc5dd85ec5b430dee52b282" Dec 02 10:35:35 crc kubenswrapper[4813]: I1202 10:35:35.276653 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" Dec 02 10:35:35 crc kubenswrapper[4813]: E1202 10:35:35.278556 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-hz75q" podUID="5b7b8eae-da35-4f54-83ec-6343ebedecfa" Dec 02 10:35:35 crc kubenswrapper[4813]: I1202 10:35:35.326651 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-p7898"] Dec 02 10:35:35 crc kubenswrapper[4813]: I1202 10:35:35.334203 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-p7898"] Dec 02 10:35:36 crc kubenswrapper[4813]: I1202 10:35:36.078307 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" path="/var/lib/kubelet/pods/608bffa9-2843-4086-a271-a1ffb445b00a/volumes" Dec 02 10:35:37 crc kubenswrapper[4813]: I1202 10:35:37.449018 4813 scope.go:117] "RemoveContainer" containerID="6ed282d37298592386b882dfb357d7f8cb68e118e33252c0e033d13c06667601" Dec 02 10:35:37 crc kubenswrapper[4813]: E1202 10:35:37.474203 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 02 10:35:37 crc kubenswrapper[4813]: E1202 10:35:37.474379 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwmbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2gtvp_openstack(aeebb6e7-c26e-421b-ab9c-4b75379601bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 10:35:37 crc kubenswrapper[4813]: E1202 10:35:37.475833 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2gtvp" podUID="aeebb6e7-c26e-421b-ab9c-4b75379601bf" Dec 02 10:35:37 crc kubenswrapper[4813]: I1202 10:35:37.871439 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d585l"] Dec 02 10:35:37 crc kubenswrapper[4813]: I1202 10:35:37.906083 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-p7898" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Dec 02 10:35:38 crc kubenswrapper[4813]: I1202 10:35:38.303485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d585l" event={"ID":"aad57710-e572-4e03-8f87-6770c28d8c0c","Type":"ContainerStarted","Data":"ec1fec237a6bb33a978a602c60c954955ac9da65a2ad772d575fbfb1cf63765d"} Dec 02 10:35:38 crc 
kubenswrapper[4813]: E1202 10:35:38.305596 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2gtvp" podUID="aeebb6e7-c26e-421b-ab9c-4b75379601bf" Dec 02 10:35:39 crc kubenswrapper[4813]: I1202 10:35:39.067837 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:35:39 crc kubenswrapper[4813]: E1202 10:35:39.068127 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:35:39 crc kubenswrapper[4813]: I1202 10:35:39.312793 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d585l" event={"ID":"aad57710-e572-4e03-8f87-6770c28d8c0c","Type":"ContainerStarted","Data":"75680c4de951d46bc51e1427ef02d21bd99921c4fc734cd6967d1b26ff5fd85e"} Dec 02 10:35:39 crc kubenswrapper[4813]: I1202 10:35:39.316623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4md7t" event={"ID":"ceb09f23-052f-4207-8c2b-ea7736d76499","Type":"ContainerStarted","Data":"5d878ce7034685a3374806923d655a3007d562a1681f1713010cc1b8c938a0da"} Dec 02 10:35:39 crc kubenswrapper[4813]: I1202 10:35:39.318610 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0427545d-5ab6-45fd-9d6a-ec1614b54c2c","Type":"ContainerStarted","Data":"a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989"} Dec 02 10:35:39 crc kubenswrapper[4813]: I1202 10:35:39.338512 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d585l" podStartSLOduration=12.338483788 podStartE2EDuration="12.338483788s" podCreationTimestamp="2025-12-02 10:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:39.329453062 +0000 UTC m=+1663.524627364" watchObservedRunningTime="2025-12-02 10:35:39.338483788 +0000 UTC m=+1663.533658100" Dec 02 10:35:39 crc kubenswrapper[4813]: I1202 10:35:39.358034 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4md7t" podStartSLOduration=3.860200457 podStartE2EDuration="31.358012391s" podCreationTimestamp="2025-12-02 10:35:08 +0000 UTC" firstStartedPulling="2025-12-02 10:35:09.934587524 +0000 UTC m=+1634.129761826" lastFinishedPulling="2025-12-02 10:35:37.432399458 +0000 UTC m=+1661.627573760" observedRunningTime="2025-12-02 10:35:39.350100527 +0000 UTC m=+1663.545274829" watchObservedRunningTime="2025-12-02 10:35:39.358012391 +0000 UTC m=+1663.553186693" Dec 02 10:35:42 crc kubenswrapper[4813]: I1202 10:35:42.345798 4813 generic.go:334] "Generic (PLEG): container finished" podID="ceb09f23-052f-4207-8c2b-ea7736d76499" containerID="5d878ce7034685a3374806923d655a3007d562a1681f1713010cc1b8c938a0da" exitCode=0 Dec 02 10:35:42 crc kubenswrapper[4813]: I1202 10:35:42.345866 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-4md7t" event={"ID":"ceb09f23-052f-4207-8c2b-ea7736d76499","Type":"ContainerDied","Data":"5d878ce7034685a3374806923d655a3007d562a1681f1713010cc1b8c938a0da"} Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.699673 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.859567 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-combined-ca-bundle\") pod \"ceb09f23-052f-4207-8c2b-ea7736d76499\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.859972 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-scripts\") pod \"ceb09f23-052f-4207-8c2b-ea7736d76499\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.860092 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb09f23-052f-4207-8c2b-ea7736d76499-logs\") pod \"ceb09f23-052f-4207-8c2b-ea7736d76499\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.860124 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv8pk\" (UniqueName: \"kubernetes.io/projected/ceb09f23-052f-4207-8c2b-ea7736d76499-kube-api-access-mv8pk\") pod \"ceb09f23-052f-4207-8c2b-ea7736d76499\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.860193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-config-data\") pod \"ceb09f23-052f-4207-8c2b-ea7736d76499\" (UID: \"ceb09f23-052f-4207-8c2b-ea7736d76499\") " Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.861530 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb09f23-052f-4207-8c2b-ea7736d76499-logs" (OuterVolumeSpecName: "logs") pod "ceb09f23-052f-4207-8c2b-ea7736d76499" (UID: "ceb09f23-052f-4207-8c2b-ea7736d76499"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.865440 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-scripts" (OuterVolumeSpecName: "scripts") pod "ceb09f23-052f-4207-8c2b-ea7736d76499" (UID: "ceb09f23-052f-4207-8c2b-ea7736d76499"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.865537 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb09f23-052f-4207-8c2b-ea7736d76499-kube-api-access-mv8pk" (OuterVolumeSpecName: "kube-api-access-mv8pk") pod "ceb09f23-052f-4207-8c2b-ea7736d76499" (UID: "ceb09f23-052f-4207-8c2b-ea7736d76499"). InnerVolumeSpecName "kube-api-access-mv8pk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.886676 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-config-data" (OuterVolumeSpecName: "config-data") pod "ceb09f23-052f-4207-8c2b-ea7736d76499" (UID: "ceb09f23-052f-4207-8c2b-ea7736d76499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.897239 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceb09f23-052f-4207-8c2b-ea7736d76499" (UID: "ceb09f23-052f-4207-8c2b-ea7736d76499"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.962397 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb09f23-052f-4207-8c2b-ea7736d76499-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.962430 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv8pk\" (UniqueName: \"kubernetes.io/projected/ceb09f23-052f-4207-8c2b-ea7736d76499-kube-api-access-mv8pk\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.962441 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.962450 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:43 crc kubenswrapper[4813]: I1202 10:35:43.962458 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb09f23-052f-4207-8c2b-ea7736d76499-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.367645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0427545d-5ab6-45fd-9d6a-ec1614b54c2c","Type":"ContainerStarted","Data":"d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331"} Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.370003 4813 generic.go:334] "Generic (PLEG): container finished" podID="aad57710-e572-4e03-8f87-6770c28d8c0c" containerID="75680c4de951d46bc51e1427ef02d21bd99921c4fc734cd6967d1b26ff5fd85e" exitCode=0 Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.370066 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d585l" event={"ID":"aad57710-e572-4e03-8f87-6770c28d8c0c","Type":"ContainerDied","Data":"75680c4de951d46bc51e1427ef02d21bd99921c4fc734cd6967d1b26ff5fd85e"} Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.376198 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4md7t" event={"ID":"ceb09f23-052f-4207-8c2b-ea7736d76499","Type":"ContainerDied","Data":"f394b56135058d74b53f74f3dcf40ec2c615e6688db3d49f4815188e27cd800a"} Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.376233 4813 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f394b56135058d74b53f74f3dcf40ec2c615e6688db3d49f4815188e27cd800a" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.376300 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4md7t" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.461058 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-58f7b95df4-zpk5x"] Dec 02 10:35:44 crc kubenswrapper[4813]: E1202 10:35:44.463266 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="dnsmasq-dns" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.463293 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="dnsmasq-dns" Dec 02 10:35:44 crc kubenswrapper[4813]: E1202 10:35:44.463309 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="init" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.463315 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="init" Dec 02 10:35:44 crc kubenswrapper[4813]: E1202 10:35:44.463327 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb09f23-052f-4207-8c2b-ea7736d76499" containerName="placement-db-sync" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.463333 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb09f23-052f-4207-8c2b-ea7736d76499" containerName="placement-db-sync" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.463495 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb09f23-052f-4207-8c2b-ea7736d76499" containerName="placement-db-sync" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.463511 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="608bffa9-2843-4086-a271-a1ffb445b00a" containerName="dnsmasq-dns" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.468155 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.470055 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zvndq" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.470248 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.470298 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.471063 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58f7b95df4-zpk5x"] Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.473772 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.480530 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.572177 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-combined-ca-bundle\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.572258 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-config-data\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.572349 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-public-tls-certs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.572423 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-logs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.572451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-internal-tls-certs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.572477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pcpr\" (UniqueName: \"kubernetes.io/projected/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-kube-api-access-7pcpr\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 
10:35:44.572549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-scripts\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.674042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-combined-ca-bundle\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.674444 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-config-data\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.674486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-public-tls-certs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.674525 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-logs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.674556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-internal-tls-certs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.674586 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pcpr\" (UniqueName: \"kubernetes.io/projected/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-kube-api-access-7pcpr\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.674621 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-scripts\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.675623 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-logs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.679028 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-public-tls-certs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.679099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-config-data\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.679169 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-internal-tls-certs\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.679475 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-scripts\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.680503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-combined-ca-bundle\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.711229 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pcpr\" (UniqueName: \"kubernetes.io/projected/0ab19173-c03c-4f46-8f2e-550ea7a70fd3-kube-api-access-7pcpr\") pod \"placement-58f7b95df4-zpk5x\" (UID: \"0ab19173-c03c-4f46-8f2e-550ea7a70fd3\") " pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:44 crc kubenswrapper[4813]: I1202 10:35:44.785954 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.227921 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58f7b95df4-zpk5x"] Dec 02 10:35:45 crc kubenswrapper[4813]: W1202 10:35:45.240432 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab19173_c03c_4f46_8f2e_550ea7a70fd3.slice/crio-5c380e85dd6ff7182ec8aca13575427b6e89929af1c1cf90dd449f6ddf7a93d2 WatchSource:0}: Error finding container 5c380e85dd6ff7182ec8aca13575427b6e89929af1c1cf90dd449f6ddf7a93d2: Status 404 returned error can't find the container with id 5c380e85dd6ff7182ec8aca13575427b6e89929af1c1cf90dd449f6ddf7a93d2 Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.388094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58f7b95df4-zpk5x" event={"ID":"0ab19173-c03c-4f46-8f2e-550ea7a70fd3","Type":"ContainerStarted","Data":"5c380e85dd6ff7182ec8aca13575427b6e89929af1c1cf90dd449f6ddf7a93d2"} Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.777866 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.909198 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6phl\" (UniqueName: \"kubernetes.io/projected/aad57710-e572-4e03-8f87-6770c28d8c0c-kube-api-access-m6phl\") pod \"aad57710-e572-4e03-8f87-6770c28d8c0c\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.909279 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-fernet-keys\") pod \"aad57710-e572-4e03-8f87-6770c28d8c0c\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.909340 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-config-data\") pod \"aad57710-e572-4e03-8f87-6770c28d8c0c\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.909460 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-scripts\") pod \"aad57710-e572-4e03-8f87-6770c28d8c0c\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.909488 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-credential-keys\") pod \"aad57710-e572-4e03-8f87-6770c28d8c0c\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.909530 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-combined-ca-bundle\") pod \"aad57710-e572-4e03-8f87-6770c28d8c0c\" (UID: \"aad57710-e572-4e03-8f87-6770c28d8c0c\") " Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.915091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad57710-e572-4e03-8f87-6770c28d8c0c-kube-api-access-m6phl" (OuterVolumeSpecName: "kube-api-access-m6phl") pod "aad57710-e572-4e03-8f87-6770c28d8c0c" (UID: "aad57710-e572-4e03-8f87-6770c28d8c0c"). InnerVolumeSpecName "kube-api-access-m6phl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.915154 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "aad57710-e572-4e03-8f87-6770c28d8c0c" (UID: "aad57710-e572-4e03-8f87-6770c28d8c0c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.915595 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-scripts" (OuterVolumeSpecName: "scripts") pod "aad57710-e572-4e03-8f87-6770c28d8c0c" (UID: "aad57710-e572-4e03-8f87-6770c28d8c0c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.915974 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aad57710-e572-4e03-8f87-6770c28d8c0c" (UID: "aad57710-e572-4e03-8f87-6770c28d8c0c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.939245 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aad57710-e572-4e03-8f87-6770c28d8c0c" (UID: "aad57710-e572-4e03-8f87-6770c28d8c0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:45 crc kubenswrapper[4813]: I1202 10:35:45.940344 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-config-data" (OuterVolumeSpecName: "config-data") pod "aad57710-e572-4e03-8f87-6770c28d8c0c" (UID: "aad57710-e572-4e03-8f87-6770c28d8c0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.011335 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.011371 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.011382 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.011391 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6phl\" (UniqueName: \"kubernetes.io/projected/aad57710-e572-4e03-8f87-6770c28d8c0c-kube-api-access-m6phl\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.011400 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.011408 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad57710-e572-4e03-8f87-6770c28d8c0c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.398715 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d585l" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.398996 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d585l" event={"ID":"aad57710-e572-4e03-8f87-6770c28d8c0c","Type":"ContainerDied","Data":"ec1fec237a6bb33a978a602c60c954955ac9da65a2ad772d575fbfb1cf63765d"} Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.399124 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec1fec237a6bb33a978a602c60c954955ac9da65a2ad772d575fbfb1cf63765d" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.404604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58f7b95df4-zpk5x" event={"ID":"0ab19173-c03c-4f46-8f2e-550ea7a70fd3","Type":"ContainerStarted","Data":"b1adde3dc05a96047cf976245277fa4737cf9aa666310dff228e390605e054b1"} Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.404649 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58f7b95df4-zpk5x" event={"ID":"0ab19173-c03c-4f46-8f2e-550ea7a70fd3","Type":"ContainerStarted","Data":"58660473253edf0b2a5ae37727d0b5f3bedb6e2c99825ac8cebaa2dac1ff662d"} Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.405317 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.405355 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.406873 4813 generic.go:334] "Generic (PLEG): container finished" podID="41229fd1-12e2-41db-96d9-ac6349cf5756" containerID="4f93d146244abe410fd11e1ff1304257526953280fff9afc549a43ebdbfda79a" exitCode=0 Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.406911 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kd7cw" event={"ID":"41229fd1-12e2-41db-96d9-ac6349cf5756","Type":"ContainerDied","Data":"4f93d146244abe410fd11e1ff1304257526953280fff9afc549a43ebdbfda79a"} Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.443768 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-58f7b95df4-zpk5x" podStartSLOduration=2.443657486 podStartE2EDuration="2.443657486s" podCreationTimestamp="2025-12-02 10:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:46.440373133 +0000 UTC m=+1670.635547455" watchObservedRunningTime="2025-12-02 10:35:46.443657486 +0000 UTC m=+1670.638831788" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.532393 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-75b688f9cf-8gj5w"] Dec 02 10:35:46 crc kubenswrapper[4813]: E1202 10:35:46.532848 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad57710-e572-4e03-8f87-6770c28d8c0c" containerName="keystone-bootstrap" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.532872 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad57710-e572-4e03-8f87-6770c28d8c0c" containerName="keystone-bootstrap" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.533125 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad57710-e572-4e03-8f87-6770c28d8c0c" containerName="keystone-bootstrap" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.533798 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.537237 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tsjhn" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.541557 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.541649 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.541714 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.541567 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.542628 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.565142 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75b688f9cf-8gj5w"] Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.623867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-internal-tls-certs\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.623943 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-scripts\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.624014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdf4z\" (UniqueName: \"kubernetes.io/projected/6236fd33-cc67-443d-bb34-287b98d8ed72-kube-api-access-mdf4z\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.624137 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-config-data\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.624175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-fernet-keys\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.624194 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-combined-ca-bundle\") pod 
\"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.624260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-credential-keys\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.624293 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-public-tls-certs\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.727139 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdf4z\" (UniqueName: \"kubernetes.io/projected/6236fd33-cc67-443d-bb34-287b98d8ed72-kube-api-access-mdf4z\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.727359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-config-data\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.727448 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-fernet-keys\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.727479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-combined-ca-bundle\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.727555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-credential-keys\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.727652 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-public-tls-certs\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.727783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-internal-tls-certs\") pod \"keystone-75b688f9cf-8gj5w\" (UID: 
\"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.727850 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-scripts\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.733532 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-credential-keys\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.733595 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-scripts\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.733624 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-internal-tls-certs\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.734535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-config-data\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.734798 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-public-tls-certs\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.736871 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-fernet-keys\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.738883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6236fd33-cc67-443d-bb34-287b98d8ed72-combined-ca-bundle\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.747984 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdf4z\" (UniqueName: \"kubernetes.io/projected/6236fd33-cc67-443d-bb34-287b98d8ed72-kube-api-access-mdf4z\") pod \"keystone-75b688f9cf-8gj5w\" (UID: \"6236fd33-cc67-443d-bb34-287b98d8ed72\") " pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:46 crc kubenswrapper[4813]: I1202 10:35:46.856028 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:47 crc kubenswrapper[4813]: I1202 10:35:47.289666 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75b688f9cf-8gj5w"] Dec 02 10:35:50 crc kubenswrapper[4813]: W1202 10:35:50.105267 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6236fd33_cc67_443d_bb34_287b98d8ed72.slice/crio-5e80e7f81a4e336b3f1a3b98a42a040c423e4f8850346d7c4665201b5d46e097 WatchSource:0}: Error finding container 5e80e7f81a4e336b3f1a3b98a42a040c423e4f8850346d7c4665201b5d46e097: Status 404 returned error can't find the container with id 5e80e7f81a4e336b3f1a3b98a42a040c423e4f8850346d7c4665201b5d46e097 Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.179327 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.289316 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-config\") pod \"41229fd1-12e2-41db-96d9-ac6349cf5756\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.289662 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtz9c\" (UniqueName: \"kubernetes.io/projected/41229fd1-12e2-41db-96d9-ac6349cf5756-kube-api-access-qtz9c\") pod \"41229fd1-12e2-41db-96d9-ac6349cf5756\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.289726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-combined-ca-bundle\") pod \"41229fd1-12e2-41db-96d9-ac6349cf5756\" (UID: \"41229fd1-12e2-41db-96d9-ac6349cf5756\") " Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.294694 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41229fd1-12e2-41db-96d9-ac6349cf5756-kube-api-access-qtz9c" (OuterVolumeSpecName: "kube-api-access-qtz9c") pod "41229fd1-12e2-41db-96d9-ac6349cf5756" (UID: "41229fd1-12e2-41db-96d9-ac6349cf5756"). InnerVolumeSpecName "kube-api-access-qtz9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.314576 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-config" (OuterVolumeSpecName: "config") pod "41229fd1-12e2-41db-96d9-ac6349cf5756" (UID: "41229fd1-12e2-41db-96d9-ac6349cf5756"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.316384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41229fd1-12e2-41db-96d9-ac6349cf5756" (UID: "41229fd1-12e2-41db-96d9-ac6349cf5756"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.391904 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.391951 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtz9c\" (UniqueName: \"kubernetes.io/projected/41229fd1-12e2-41db-96d9-ac6349cf5756-kube-api-access-qtz9c\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.392005 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41229fd1-12e2-41db-96d9-ac6349cf5756-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.447895 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kd7cw" event={"ID":"41229fd1-12e2-41db-96d9-ac6349cf5756","Type":"ContainerDied","Data":"caacd26303c14244f442b619b0544e9017836319460dc0eaedde19d728bec2e5"} Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.447932 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caacd26303c14244f442b619b0544e9017836319460dc0eaedde19d728bec2e5" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.447902 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kd7cw" Dec 02 10:35:50 crc kubenswrapper[4813]: I1202 10:35:50.449601 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75b688f9cf-8gj5w" event={"ID":"6236fd33-cc67-443d-bb34-287b98d8ed72","Type":"ContainerStarted","Data":"5e80e7f81a4e336b3f1a3b98a42a040c423e4f8850346d7c4665201b5d46e097"} Dec 02 10:35:50 crc kubenswrapper[4813]: E1202 10:35:50.783780 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.068173 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:35:51 crc kubenswrapper[4813]: E1202 10:35:51.068634 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.442874 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-6hgch"] Dec 02 10:35:51 crc kubenswrapper[4813]: E1202 10:35:51.443284 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41229fd1-12e2-41db-96d9-ac6349cf5756" containerName="neutron-db-sync" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.443324 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="41229fd1-12e2-41db-96d9-ac6349cf5756" containerName="neutron-db-sync" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.443529 4813 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="41229fd1-12e2-41db-96d9-ac6349cf5756" containerName="neutron-db-sync" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.444382 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.465126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0427545d-5ab6-45fd-9d6a-ec1614b54c2c","Type":"ContainerStarted","Data":"0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02"} Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.465352 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.465392 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="ceilometer-notification-agent" containerID="cri-o://a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989" gracePeriod=30 Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.465424 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="proxy-httpd" containerID="cri-o://0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02" gracePeriod=30 Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.465613 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="sg-core" containerID="cri-o://d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331" gracePeriod=30 Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.481343 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2gtvp" event={"ID":"aeebb6e7-c26e-421b-ab9c-4b75379601bf","Type":"ContainerStarted","Data":"1e9d4436ab510bb1c15e062d7d96769e04a47894d121b5ae8dae6533050f3016"} Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.483338 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75b688f9cf-8gj5w" event={"ID":"6236fd33-cc67-443d-bb34-287b98d8ed72","Type":"ContainerStarted","Data":"04475badab027348e52cc53e51b1d384fea0e36d4bdc415cbf368be9c3dfc1ed"} Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.483772 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.500992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hz75q" event={"ID":"5b7b8eae-da35-4f54-83ec-6343ebedecfa","Type":"ContainerStarted","Data":"c4bdf3c737d0bd9f7a539259286fff59738b25a59fcb2257d009f32579596237"} Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.505140 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-6hgch"] Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.562951 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2gtvp" podStartSLOduration=2.661978364 podStartE2EDuration="43.562928131s" podCreationTimestamp="2025-12-02 10:35:08 +0000 UTC" firstStartedPulling="2025-12-02 10:35:09.707354985 +0000 UTC m=+1633.902529287" lastFinishedPulling="2025-12-02 10:35:50.608304752 +0000 UTC m=+1674.803479054" observedRunningTime="2025-12-02 
10:35:51.544440217 +0000 UTC m=+1675.739614519" watchObservedRunningTime="2025-12-02 10:35:51.562928131 +0000 UTC m=+1675.758102433" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.620416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.620464 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.620493 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-config\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.620551 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-dns-svc\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.620607 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vfqc\" (UniqueName: \"kubernetes.io/projected/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-kube-api-access-4vfqc\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.653281 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-hz75q" podStartSLOduration=2.867093107 podStartE2EDuration="43.653262381s" podCreationTimestamp="2025-12-02 10:35:08 +0000 UTC" firstStartedPulling="2025-12-02 10:35:09.8359957 +0000 UTC m=+1634.031170002" lastFinishedPulling="2025-12-02 10:35:50.622164974 +0000 UTC m=+1674.817339276" observedRunningTime="2025-12-02 10:35:51.651859751 +0000 UTC m=+1675.847034053" watchObservedRunningTime="2025-12-02 10:35:51.653262381 +0000 UTC m=+1675.848436683" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.682181 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-75b688f9cf-8gj5w" podStartSLOduration=5.68216271 podStartE2EDuration="5.68216271s" podCreationTimestamp="2025-12-02 10:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:51.678219078 +0000 UTC m=+1675.873393380" watchObservedRunningTime="2025-12-02 10:35:51.68216271 +0000 UTC m=+1675.877337022" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.722006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.722050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.722133 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-config\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.722180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-dns-svc\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.722226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vfqc\" (UniqueName: \"kubernetes.io/projected/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-kube-api-access-4vfqc\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.723019 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.741654 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.742030 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-config\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.742105 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-dns-svc\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.753454 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vfqc\" (UniqueName: \"kubernetes.io/projected/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-kube-api-access-4vfqc\") pod \"dnsmasq-dns-7b946d459c-6hgch\" (UID: 
\"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.771630 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.833783 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c87584f6d-77bdt"] Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.835635 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.840053 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.840218 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fwfrt" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.841108 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.843142 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.849996 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c87584f6d-77bdt"] Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.924606 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvfn\" (UniqueName: \"kubernetes.io/projected/368ae054-4a5a-4187-a078-e1db70e84741-kube-api-access-hmvfn\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.924818 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-ovndb-tls-certs\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.924966 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-httpd-config\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.925159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-combined-ca-bundle\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:51 crc kubenswrapper[4813]: I1202 10:35:51.925246 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-config\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.027270 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-httpd-config\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.027697 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-combined-ca-bundle\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.027729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-config\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.027803 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmvfn\" (UniqueName: \"kubernetes.io/projected/368ae054-4a5a-4187-a078-e1db70e84741-kube-api-access-hmvfn\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.027859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-ovndb-tls-certs\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.034983 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-combined-ca-bundle\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.036060 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-ovndb-tls-certs\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.039192 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-config\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.039768 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-httpd-config\") pod \"neutron-6c87584f6d-77bdt\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.055922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmvfn\" (UniqueName: \"kubernetes.io/projected/368ae054-4a5a-4187-a078-e1db70e84741-kube-api-access-hmvfn\") pod \"neutron-6c87584f6d-77bdt\" (UID: 
\"368ae054-4a5a-4187-a078-e1db70e84741\") " pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.252061 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.345635 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-6hgch"] Dec 02 10:35:52 crc kubenswrapper[4813]: W1202 10:35:52.361890 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeff4d2ad_314e_429f_a35d_fc47c9ca89a2.slice/crio-01b2329c9e78b774136a1e0383eafd60f48e6f4ffd0d9b6d0609bf90a48be383 WatchSource:0}: Error finding container 01b2329c9e78b774136a1e0383eafd60f48e6f4ffd0d9b6d0609bf90a48be383: Status 404 returned error can't find the container with id 01b2329c9e78b774136a1e0383eafd60f48e6f4ffd0d9b6d0609bf90a48be383 Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.536899 4813 generic.go:334] "Generic (PLEG): container finished" podID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerID="0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02" exitCode=0 Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.536936 4813 generic.go:334] "Generic (PLEG): container finished" podID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerID="d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331" exitCode=2 Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.536998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0427545d-5ab6-45fd-9d6a-ec1614b54c2c","Type":"ContainerDied","Data":"0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02"} Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.537029 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0427545d-5ab6-45fd-9d6a-ec1614b54c2c","Type":"ContainerDied","Data":"d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331"} Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.539567 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" event={"ID":"eff4d2ad-314e-429f-a35d-fc47c9ca89a2","Type":"ContainerStarted","Data":"01b2329c9e78b774136a1e0383eafd60f48e6f4ffd0d9b6d0609bf90a48be383"} Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.907640 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c87584f6d-77bdt"] Dec 02 10:35:52 crc kubenswrapper[4813]: I1202 10:35:52.938494 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047190 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-run-httpd\") pod \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047448 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq8mk\" (UniqueName: \"kubernetes.io/projected/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-kube-api-access-cq8mk\") pod \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047499 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-combined-ca-bundle\") pod \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047520 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-sg-core-conf-yaml\") pod \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047542 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-log-httpd\") pod \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047577 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-config-data\") pod \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047615 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-scripts\") pod \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\" (UID: \"0427545d-5ab6-45fd-9d6a-ec1614b54c2c\") " Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047779 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0427545d-5ab6-45fd-9d6a-ec1614b54c2c" (UID: "0427545d-5ab6-45fd-9d6a-ec1614b54c2c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.047965 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.048270 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0427545d-5ab6-45fd-9d6a-ec1614b54c2c" (UID: "0427545d-5ab6-45fd-9d6a-ec1614b54c2c"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.053607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-scripts" (OuterVolumeSpecName: "scripts") pod "0427545d-5ab6-45fd-9d6a-ec1614b54c2c" (UID: "0427545d-5ab6-45fd-9d6a-ec1614b54c2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.054222 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-kube-api-access-cq8mk" (OuterVolumeSpecName: "kube-api-access-cq8mk") pod "0427545d-5ab6-45fd-9d6a-ec1614b54c2c" (UID: "0427545d-5ab6-45fd-9d6a-ec1614b54c2c"). InnerVolumeSpecName "kube-api-access-cq8mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.091261 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0427545d-5ab6-45fd-9d6a-ec1614b54c2c" (UID: "0427545d-5ab6-45fd-9d6a-ec1614b54c2c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.118356 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0427545d-5ab6-45fd-9d6a-ec1614b54c2c" (UID: "0427545d-5ab6-45fd-9d6a-ec1614b54c2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.131782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-config-data" (OuterVolumeSpecName: "config-data") pod "0427545d-5ab6-45fd-9d6a-ec1614b54c2c" (UID: "0427545d-5ab6-45fd-9d6a-ec1614b54c2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.153188 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq8mk\" (UniqueName: \"kubernetes.io/projected/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-kube-api-access-cq8mk\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.153232 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.153245 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.153258 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.153272 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.153284 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0427545d-5ab6-45fd-9d6a-ec1614b54c2c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.547869 4813 generic.go:334] "Generic (PLEG): container finished" podID="5b7b8eae-da35-4f54-83ec-6343ebedecfa" containerID="c4bdf3c737d0bd9f7a539259286fff59738b25a59fcb2257d009f32579596237" exitCode=0 Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.547931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hz75q" event={"ID":"5b7b8eae-da35-4f54-83ec-6343ebedecfa","Type":"ContainerDied","Data":"c4bdf3c737d0bd9f7a539259286fff59738b25a59fcb2257d009f32579596237"} Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.550456 4813 generic.go:334] "Generic (PLEG): container finished" podID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerID="a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989" exitCode=0 Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.550513 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.550518 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0427545d-5ab6-45fd-9d6a-ec1614b54c2c","Type":"ContainerDied","Data":"a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989"} Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.550545 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0427545d-5ab6-45fd-9d6a-ec1614b54c2c","Type":"ContainerDied","Data":"064ea4504f30760ee120f7f3994689ceaac817ef01a6e8c8bd3c5d72e5bb44af"} Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.550565 4813 scope.go:117] "RemoveContainer" containerID="0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.552006 4813 generic.go:334] "Generic (PLEG): container finished" podID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" containerID="bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da" exitCode=0 Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.552046 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" event={"ID":"eff4d2ad-314e-429f-a35d-fc47c9ca89a2","Type":"ContainerDied","Data":"bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da"} Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.556881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c87584f6d-77bdt" event={"ID":"368ae054-4a5a-4187-a078-e1db70e84741","Type":"ContainerStarted","Data":"dc9c9ef7925c57c5b5671e409ccc460c4a293e3fbf7b33465f6a2afb1d10db9f"} Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.556919 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.556929 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c87584f6d-77bdt" event={"ID":"368ae054-4a5a-4187-a078-e1db70e84741","Type":"ContainerStarted","Data":"0608141ae8fe5118f289835c3b1426426dd7de32fa5c6aa0d6155a04e2ea5cb0"} Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.556938 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c87584f6d-77bdt" event={"ID":"368ae054-4a5a-4187-a078-e1db70e84741","Type":"ContainerStarted","Data":"3efc5168f0105927cdf37db0557a13951e5c5596f09e6b276c12fa833f387c77"} Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.593392 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c87584f6d-77bdt" podStartSLOduration=2.593370004 podStartE2EDuration="2.593370004s" podCreationTimestamp="2025-12-02 10:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:53.581317423 +0000 UTC m=+1677.776491755" watchObservedRunningTime="2025-12-02 10:35:53.593370004 +0000 UTC m=+1677.788544306" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.702546 4813 scope.go:117] "RemoveContainer" containerID="d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.719679 4813 scope.go:117] "RemoveContainer" containerID="a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.751605 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.760330 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.767412 4813 scope.go:117] "RemoveContainer" containerID="0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02" Dec 02 10:35:53 crc kubenswrapper[4813]: E1202 10:35:53.770537 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02\": container with ID starting with 0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02 not found: ID does not exist" containerID="0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.770581 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02"} err="failed to get container status \"0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02\": rpc error: code = NotFound desc = could not find container \"0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02\": container with ID starting with 0c3bbc27abd7fb695b6e29335a2f330f2bd5e97ae2789aee2fa88662fcc87c02 not found: ID does not exist" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.770605 4813 scope.go:117] "RemoveContainer" containerID="d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331" Dec 02 10:35:53 crc kubenswrapper[4813]: E1202 10:35:53.770870 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331\": container with ID starting with d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331 not found: ID does not exist" containerID="d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.770890 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331"} err="failed to get container status \"d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331\": rpc error: code = NotFound desc = could not find container \"d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331\": container with ID starting with d35d6486e92338ddc1cfb376a2e71a284232ab5b26f00bc7ff6f4886b755f331 not found: ID does not exist" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.770905 4813 scope.go:117] "RemoveContainer" containerID="a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989" Dec 02 10:35:53 crc kubenswrapper[4813]: E1202 10:35:53.771127 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989\": container with ID starting with a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989 not found: ID does not exist" containerID="a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.771147 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989"} err="failed to get 
container status \"a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989\": rpc error: code = NotFound desc = could not find container \"a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989\": container with ID starting with a8e61f8cfd7829db9c78c5747a1451c9df733831705b744bd7dc46f1fb8ae989 not found: ID does not exist" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.793555 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:53 crc kubenswrapper[4813]: E1202 10:35:53.793998 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="ceilometer-notification-agent" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.794020 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="ceilometer-notification-agent" Dec 02 10:35:53 crc kubenswrapper[4813]: E1202 10:35:53.794041 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="proxy-httpd" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.794050 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="proxy-httpd" Dec 02 10:35:53 crc kubenswrapper[4813]: E1202 10:35:53.794065 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="sg-core" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.794090 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="sg-core" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.794286 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="ceilometer-notification-agent" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.794298 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="proxy-httpd" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.794308 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" containerName="sg-core" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.796164 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.799756 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.800234 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.805474 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.966417 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.966492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.966527 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.966572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-scripts\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.966635 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.966673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vmm4\" (UniqueName: \"kubernetes.io/projected/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-kube-api-access-2vmm4\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:53 crc kubenswrapper[4813]: I1202 10:35:53.966736 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-config-data\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.067890 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-config-data\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.068032 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.068102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.068142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.068222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-scripts\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.068326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.068381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vmm4\" (UniqueName: \"kubernetes.io/projected/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-kube-api-access-2vmm4\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.068708 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.068988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.073521 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.074097 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-config-data\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.085408 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2vmm4\" (UniqueName: \"kubernetes.io/projected/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-kube-api-access-2vmm4\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.092866 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-scripts\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.093219 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.111122 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0427545d-5ab6-45fd-9d6a-ec1614b54c2c" path="/var/lib/kubelet/pods/0427545d-5ab6-45fd-9d6a-ec1614b54c2c/volumes" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.129474 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.398214 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5764b7874f-mhh86"] Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.400290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.409519 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.409830 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.419299 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5764b7874f-mhh86"] Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.572893 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" event={"ID":"eff4d2ad-314e-429f-a35d-fc47c9ca89a2","Type":"ContainerStarted","Data":"5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b"} Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.572944 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.577030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-config\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.577128 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2fp8\" (UniqueName: \"kubernetes.io/projected/6edf0036-9e60-413c-9a23-38a0c0e95a84-kube-api-access-j2fp8\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.577174 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-combined-ca-bundle\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.577242 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-httpd-config\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.577275 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-internal-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.577330 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-public-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.577359 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-ovndb-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.583700 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.605349 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" podStartSLOduration=3.605323969 podStartE2EDuration="3.605323969s" podCreationTimestamp="2025-12-02 10:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:54.596226341 +0000 UTC m=+1678.791400663" watchObservedRunningTime="2025-12-02 10:35:54.605323969 +0000 UTC m=+1678.800498271" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.678425 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-httpd-config\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.678475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-internal-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.678552 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-public-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.678583 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-ovndb-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.678624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-config\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.678681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2fp8\" (UniqueName: \"kubernetes.io/projected/6edf0036-9e60-413c-9a23-38a0c0e95a84-kube-api-access-j2fp8\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.678723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-combined-ca-bundle\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.685863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-httpd-config\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.686456 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-internal-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.689375 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-config\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.699942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-ovndb-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.703404 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-public-tls-certs\") pod \"neutron-5764b7874f-mhh86\" (UID: 
\"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.704308 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edf0036-9e60-413c-9a23-38a0c0e95a84-combined-ca-bundle\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.708432 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2fp8\" (UniqueName: \"kubernetes.io/projected/6edf0036-9e60-413c-9a23-38a0c0e95a84-kube-api-access-j2fp8\") pod \"neutron-5764b7874f-mhh86\" (UID: \"6edf0036-9e60-413c-9a23-38a0c0e95a84\") " pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.745733 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.799417 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.884369 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-combined-ca-bundle\") pod \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.884456 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-db-sync-config-data\") pod \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.884584 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc7vr\" (UniqueName: \"kubernetes.io/projected/5b7b8eae-da35-4f54-83ec-6343ebedecfa-kube-api-access-pc7vr\") pod \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\" (UID: \"5b7b8eae-da35-4f54-83ec-6343ebedecfa\") " Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.888778 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7b8eae-da35-4f54-83ec-6343ebedecfa-kube-api-access-pc7vr" (OuterVolumeSpecName: "kube-api-access-pc7vr") pod "5b7b8eae-da35-4f54-83ec-6343ebedecfa" (UID: "5b7b8eae-da35-4f54-83ec-6343ebedecfa"). InnerVolumeSpecName "kube-api-access-pc7vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.888955 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5b7b8eae-da35-4f54-83ec-6343ebedecfa" (UID: "5b7b8eae-da35-4f54-83ec-6343ebedecfa"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.915327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b7b8eae-da35-4f54-83ec-6343ebedecfa" (UID: "5b7b8eae-da35-4f54-83ec-6343ebedecfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.986675 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc7vr\" (UniqueName: \"kubernetes.io/projected/5b7b8eae-da35-4f54-83ec-6343ebedecfa-kube-api-access-pc7vr\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.986720 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:54 crc kubenswrapper[4813]: I1202 10:35:54.986733 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b7b8eae-da35-4f54-83ec-6343ebedecfa-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:55 crc kubenswrapper[4813]: I1202 10:35:55.263535 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5764b7874f-mhh86"] Dec 02 10:35:55 crc kubenswrapper[4813]: W1202 10:35:55.267777 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6edf0036_9e60_413c_9a23_38a0c0e95a84.slice/crio-79f61155bf1cf0246528ea6df59e2f794159d6b64fe7ec6400b5c363bf5b5087 WatchSource:0}: Error finding container 79f61155bf1cf0246528ea6df59e2f794159d6b64fe7ec6400b5c363bf5b5087: Status 404 returned error can't find the container with id 79f61155bf1cf0246528ea6df59e2f794159d6b64fe7ec6400b5c363bf5b5087 Dec 02 10:35:55 crc kubenswrapper[4813]: I1202 10:35:55.581258 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerStarted","Data":"8cb586dc5b9596d7c7a44fdf9478a1656f320b5ab5eefa5b5adc22ef6519f8b8"} Dec 02 10:35:55 crc kubenswrapper[4813]: I1202 10:35:55.582991 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5764b7874f-mhh86" event={"ID":"6edf0036-9e60-413c-9a23-38a0c0e95a84","Type":"ContainerStarted","Data":"b685f17a428d3471fc518593cb2c26dbddf02ec8ac0076a1ff669d5e090bef34"} Dec 02 10:35:55 crc kubenswrapper[4813]: I1202 10:35:55.583013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5764b7874f-mhh86" event={"ID":"6edf0036-9e60-413c-9a23-38a0c0e95a84","Type":"ContainerStarted","Data":"79f61155bf1cf0246528ea6df59e2f794159d6b64fe7ec6400b5c363bf5b5087"} Dec 02 10:35:55 crc kubenswrapper[4813]: I1202 10:35:55.584765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hz75q" event={"ID":"5b7b8eae-da35-4f54-83ec-6343ebedecfa","Type":"ContainerDied","Data":"f33246d80a30d1bfcecc66ccc70700ef2112a973d9cb354673a4aa14444cd7ff"} Dec 02 10:35:55 crc kubenswrapper[4813]: I1202 10:35:55.584791 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f33246d80a30d1bfcecc66ccc70700ef2112a973d9cb354673a4aa14444cd7ff" Dec 02 10:35:55 crc kubenswrapper[4813]: I1202 10:35:55.584830 4813 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hz75q" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.045826 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-66d478b798-8tjdd"] Dec 02 10:35:56 crc kubenswrapper[4813]: E1202 10:35:56.046465 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7b8eae-da35-4f54-83ec-6343ebedecfa" containerName="barbican-db-sync" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.046482 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7b8eae-da35-4f54-83ec-6343ebedecfa" containerName="barbican-db-sync" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.046627 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7b8eae-da35-4f54-83ec-6343ebedecfa" containerName="barbican-db-sync" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.047698 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.052497 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.052837 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.053008 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jpdgh" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.136867 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fbc57bb6c-lbqlm"] Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.146697 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.152407 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.156346 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-66d478b798-8tjdd"] Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.187523 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fbc57bb6c-lbqlm"] Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.254849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d75001-77f3-47d8-822f-c2b72f0d9226-logs\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.254900 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-combined-ca-bundle\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.254964 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpdbs\" (UniqueName: \"kubernetes.io/projected/c66503cc-41b9-44f1-8f42-f65908004aef-kube-api-access-fpdbs\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.254994 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-config-data-custom\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.255014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-config-data\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.255039 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-config-data\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.255135 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnr4t\" (UniqueName: \"kubernetes.io/projected/71d75001-77f3-47d8-822f-c2b72f0d9226-kube-api-access-gnr4t\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " 
pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.255160 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c66503cc-41b9-44f1-8f42-f65908004aef-logs\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.255212 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-config-data-custom\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.255238 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-combined-ca-bundle\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.297132 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-6hgch"] Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.306871 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-xjltw"] Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.308380 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.327248 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-xjltw"] Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.337634 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5597b77cc8-ptxss"] Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.339153 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.342602 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357000 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnr4t\" (UniqueName: \"kubernetes.io/projected/71d75001-77f3-47d8-822f-c2b72f0d9226-kube-api-access-gnr4t\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c66503cc-41b9-44f1-8f42-f65908004aef-logs\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-config-data-custom\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-combined-ca-bundle\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357452 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d75001-77f3-47d8-822f-c2b72f0d9226-logs\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357470 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-combined-ca-bundle\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357507 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpdbs\" (UniqueName: \"kubernetes.io/projected/c66503cc-41b9-44f1-8f42-f65908004aef-kube-api-access-fpdbs\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357528 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-config-data-custom\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: 
I1202 10:35:56.357548 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-config-data\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.357565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-config-data\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.362364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-combined-ca-bundle\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.362592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-config-data-custom\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.363230 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-config-data\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.364141 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71d75001-77f3-47d8-822f-c2b72f0d9226-logs\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.364651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c66503cc-41b9-44f1-8f42-f65908004aef-logs\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.366436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d75001-77f3-47d8-822f-c2b72f0d9226-combined-ca-bundle\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.366513 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5597b77cc8-ptxss"] Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.375944 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-config-data-custom\") pod 
\"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.380109 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpdbs\" (UniqueName: \"kubernetes.io/projected/c66503cc-41b9-44f1-8f42-f65908004aef-kube-api-access-fpdbs\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.380305 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnr4t\" (UniqueName: \"kubernetes.io/projected/71d75001-77f3-47d8-822f-c2b72f0d9226-kube-api-access-gnr4t\") pod \"barbican-worker-5fbc57bb6c-lbqlm\" (UID: \"71d75001-77f3-47d8-822f-c2b72f0d9226\") " pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.380659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66503cc-41b9-44f1-8f42-f65908004aef-config-data\") pod \"barbican-keystone-listener-66d478b798-8tjdd\" (UID: \"c66503cc-41b9-44f1-8f42-f65908004aef\") " pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.459608 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data-custom\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460266 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-dns-svc\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460290 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsw2d\" (UniqueName: \"kubernetes.io/projected/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-kube-api-access-dsw2d\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460445 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-config\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460553 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-combined-ca-bundle\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460606 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-logs\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5r8f\" (UniqueName: \"kubernetes.io/projected/304c8d18-c208-459c-af1c-0b97e1de56b6-kube-api-access-d5r8f\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.460771 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.520563 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562237 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-dns-svc\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsw2d\" (UniqueName: \"kubernetes.io/projected/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-kube-api-access-dsw2d\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-config\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562423 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-combined-ca-bundle\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-logs\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5r8f\" (UniqueName: \"kubernetes.io/projected/304c8d18-c208-459c-af1c-0b97e1de56b6-kube-api-access-d5r8f\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562583 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " 
pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.562628 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data-custom\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.563746 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-dns-svc\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.564533 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-config\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.564593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.564666 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-logs\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.565830 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.566218 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-combined-ca-bundle\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.570142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data-custom\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.581988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.583535 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d5r8f\" (UniqueName: \"kubernetes.io/projected/304c8d18-c208-459c-af1c-0b97e1de56b6-kube-api-access-d5r8f\") pod \"dnsmasq-dns-6bb684768f-xjltw\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") " pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.585412 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsw2d\" (UniqueName: \"kubernetes.io/projected/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-kube-api-access-dsw2d\") pod \"barbican-api-5597b77cc8-ptxss\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") " pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.594687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5764b7874f-mhh86" event={"ID":"6edf0036-9e60-413c-9a23-38a0c0e95a84","Type":"ContainerStarted","Data":"8b2c1125b3a85557f808ef696921c0aa2bc8105e95a02a69805e6870afb498e3"} Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.595284 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.597277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerStarted","Data":"f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779"} Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.603366 4813 generic.go:334] "Generic (PLEG): container finished" podID="aeebb6e7-c26e-421b-ab9c-4b75379601bf" containerID="1e9d4436ab510bb1c15e062d7d96769e04a47894d121b5ae8dae6533050f3016" exitCode=0 Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.603452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2gtvp" event={"ID":"aeebb6e7-c26e-421b-ab9c-4b75379601bf","Type":"ContainerDied","Data":"1e9d4436ab510bb1c15e062d7d96769e04a47894d121b5ae8dae6533050f3016"} Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.603658 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" podUID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" containerName="dnsmasq-dns" containerID="cri-o://5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b" gracePeriod=10 Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.616747 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5764b7874f-mhh86" podStartSLOduration=2.6167160320000002 podStartE2EDuration="2.616716032s" podCreationTimestamp="2025-12-02 10:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:56.612493763 +0000 UTC m=+1680.807668065" watchObservedRunningTime="2025-12-02 10:35:56.616716032 +0000 UTC m=+1680.811890334" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.675676 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.776556 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" Dec 02 10:35:56 crc kubenswrapper[4813]: I1202 10:35:56.800739 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.021303 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fbc57bb6c-lbqlm"] Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.376252 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.434731 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-xjltw"] Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.455538 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-66d478b798-8tjdd"] Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.490184 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-dns-svc\") pod \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.490479 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vfqc\" (UniqueName: \"kubernetes.io/projected/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-kube-api-access-4vfqc\") pod \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.490576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-nb\") pod \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.490684 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-config\") pod \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.490814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-sb\") pod \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\" (UID: \"eff4d2ad-314e-429f-a35d-fc47c9ca89a2\") " Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.497039 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-kube-api-access-4vfqc" (OuterVolumeSpecName: "kube-api-access-4vfqc") pod "eff4d2ad-314e-429f-a35d-fc47c9ca89a2" (UID: "eff4d2ad-314e-429f-a35d-fc47c9ca89a2"). InnerVolumeSpecName "kube-api-access-4vfqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.535602 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eff4d2ad-314e-429f-a35d-fc47c9ca89a2" (UID: "eff4d2ad-314e-429f-a35d-fc47c9ca89a2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.541979 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eff4d2ad-314e-429f-a35d-fc47c9ca89a2" (UID: "eff4d2ad-314e-429f-a35d-fc47c9ca89a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.548677 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eff4d2ad-314e-429f-a35d-fc47c9ca89a2" (UID: "eff4d2ad-314e-429f-a35d-fc47c9ca89a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.562686 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-config" (OuterVolumeSpecName: "config") pod "eff4d2ad-314e-429f-a35d-fc47c9ca89a2" (UID: "eff4d2ad-314e-429f-a35d-fc47c9ca89a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.571336 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5597b77cc8-ptxss"] Dec 02 10:35:57 crc kubenswrapper[4813]: W1202 10:35:57.574839 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d9b57a_f4b2_4c1d_b85a_9bb65c1ce7b5.slice/crio-50782d593a5c6369559b5b8146f8987599bf4c38a2705cb8b850b85bcbdab7e7 WatchSource:0}: Error finding container 50782d593a5c6369559b5b8146f8987599bf4c38a2705cb8b850b85bcbdab7e7: Status 404 returned error can't find the container with id 50782d593a5c6369559b5b8146f8987599bf4c38a2705cb8b850b85bcbdab7e7 Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.595087 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.595976 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vfqc\" (UniqueName: \"kubernetes.io/projected/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-kube-api-access-4vfqc\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.595992 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.596003 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.596011 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eff4d2ad-314e-429f-a35d-fc47c9ca89a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.612764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" event={"ID":"c66503cc-41b9-44f1-8f42-f65908004aef","Type":"ContainerStarted","Data":"18d3541ccc01e0b82463e27789e7aa9fbd141e9209dd8895f8b854f7a77fafca"} Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.613982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" event={"ID":"71d75001-77f3-47d8-822f-c2b72f0d9226","Type":"ContainerStarted","Data":"6ffa1413a50b0313657abaf4a47dcc57c8ccffe01aef86bfcc774683540c1a24"} Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.616153 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" event={"ID":"304c8d18-c208-459c-af1c-0b97e1de56b6","Type":"ContainerStarted","Data":"6399e3e106eec59f00d71d81dde9d70db03d71064792af7a32f66be3919479fc"} Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.618531 4813 generic.go:334] "Generic (PLEG): container finished" podID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" containerID="5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b" exitCode=0 Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.618625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" event={"ID":"eff4d2ad-314e-429f-a35d-fc47c9ca89a2","Type":"ContainerDied","Data":"5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b"} Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.618663 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" event={"ID":"eff4d2ad-314e-429f-a35d-fc47c9ca89a2","Type":"ContainerDied","Data":"01b2329c9e78b774136a1e0383eafd60f48e6f4ffd0d9b6d0609bf90a48be383"} Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.618685 4813 scope.go:117] "RemoveContainer" containerID="5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.618626 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-6hgch" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.626856 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5597b77cc8-ptxss" event={"ID":"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5","Type":"ContainerStarted","Data":"50782d593a5c6369559b5b8146f8987599bf4c38a2705cb8b850b85bcbdab7e7"} Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.635653 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerStarted","Data":"eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6"} Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.656781 4813 scope.go:117] "RemoveContainer" containerID="bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.683974 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-6hgch"] Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.688783 4813 scope.go:117] "RemoveContainer" containerID="5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b" Dec 02 10:35:57 crc kubenswrapper[4813]: E1202 10:35:57.689326 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b\": container with ID starting with 5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b not found: ID does not exist" containerID="5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.689368 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b"} err="failed to get container status \"5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b\": rpc error: code = NotFound desc = could not find container \"5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b\": container with ID starting with 5cc6a12c8bd0909de8833d925aa8117ba567a9c194f22dc8fecb03253edf9f8b not found: ID does not exist" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.689400 4813 scope.go:117] "RemoveContainer" containerID="bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da" Dec 02 10:35:57 crc kubenswrapper[4813]: E1202 10:35:57.689685 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da\": container with ID starting with bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da not found: ID does not exist" containerID="bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.689717 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da"} err="failed to get container status \"bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da\": rpc error: code = NotFound desc = could not find container \"bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da\": container with ID starting with bf2403ab3ed43fb62ee4c8b94d7775b6a8ee1c1c9aeadeb62650dab66e7178da not found: ID does not exist" Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 
Dec 02 10:35:57 crc kubenswrapper[4813]: I1202 10:35:57.907999 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.003523 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwmbj\" (UniqueName: \"kubernetes.io/projected/aeebb6e7-c26e-421b-ab9c-4b75379601bf-kube-api-access-xwmbj\") pod \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.003586 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-combined-ca-bundle\") pod \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.003684 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-scripts\") pod \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.003724 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-config-data\") pod \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.003757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aeebb6e7-c26e-421b-ab9c-4b75379601bf-etc-machine-id\") pod \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.003863 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-db-sync-config-data\") pod \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\" (UID: \"aeebb6e7-c26e-421b-ab9c-4b75379601bf\") " Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.005133 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aeebb6e7-c26e-421b-ab9c-4b75379601bf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aeebb6e7-c26e-421b-ab9c-4b75379601bf" (UID: "aeebb6e7-c26e-421b-ab9c-4b75379601bf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.009818 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-scripts" (OuterVolumeSpecName: "scripts") pod "aeebb6e7-c26e-421b-ab9c-4b75379601bf" (UID: "aeebb6e7-c26e-421b-ab9c-4b75379601bf"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.014558 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "aeebb6e7-c26e-421b-ab9c-4b75379601bf" (UID: "aeebb6e7-c26e-421b-ab9c-4b75379601bf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.032018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeebb6e7-c26e-421b-ab9c-4b75379601bf-kube-api-access-xwmbj" (OuterVolumeSpecName: "kube-api-access-xwmbj") pod "aeebb6e7-c26e-421b-ab9c-4b75379601bf" (UID: "aeebb6e7-c26e-421b-ab9c-4b75379601bf"). InnerVolumeSpecName "kube-api-access-xwmbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.061439 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aeebb6e7-c26e-421b-ab9c-4b75379601bf" (UID: "aeebb6e7-c26e-421b-ab9c-4b75379601bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.079929 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-config-data" (OuterVolumeSpecName: "config-data") pod "aeebb6e7-c26e-421b-ab9c-4b75379601bf" (UID: "aeebb6e7-c26e-421b-ab9c-4b75379601bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.094905 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" path="/var/lib/kubelet/pods/eff4d2ad-314e-429f-a35d-fc47c9ca89a2/volumes" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.105803 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwmbj\" (UniqueName: \"kubernetes.io/projected/aeebb6e7-c26e-421b-ab9c-4b75379601bf-kube-api-access-xwmbj\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.105857 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.105874 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.105886 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.105898 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aeebb6e7-c26e-421b-ab9c-4b75379601bf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.105908 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aeebb6e7-c26e-421b-ab9c-4b75379601bf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.655691 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2gtvp" event={"ID":"aeebb6e7-c26e-421b-ab9c-4b75379601bf","Type":"ContainerDied","Data":"7817fd4ba92417cfbdce30f8455b498201bc2713f02a089162974c4d65a763cd"} Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.656049 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7817fd4ba92417cfbdce30f8455b498201bc2713f02a089162974c4d65a763cd" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.655718 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2gtvp" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.661225 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5597b77cc8-ptxss" event={"ID":"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5","Type":"ContainerStarted","Data":"d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d"} Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.661266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5597b77cc8-ptxss" event={"ID":"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5","Type":"ContainerStarted","Data":"021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb"} Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.661828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.661872 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.669372 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerStarted","Data":"2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29"} Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.685865 4813 generic.go:334] "Generic (PLEG): container finished" podID="304c8d18-c208-459c-af1c-0b97e1de56b6" containerID="c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87" exitCode=0 Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.685904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" event={"ID":"304c8d18-c208-459c-af1c-0b97e1de56b6","Type":"ContainerDied","Data":"c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87"} Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.710495 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5597b77cc8-ptxss" podStartSLOduration=2.710478301 podStartE2EDuration="2.710478301s" podCreationTimestamp="2025-12-02 10:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:35:58.705716546 +0000 UTC m=+1682.900890848" watchObservedRunningTime="2025-12-02 10:35:58.710478301 +0000 UTC m=+1682.905652603" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.808904 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7cbdc6498b-rbl7g"] Dec 02 10:35:58 crc kubenswrapper[4813]: E1202 10:35:58.809362 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" containerName="dnsmasq-dns" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.809379 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" containerName="dnsmasq-dns" Dec 02 10:35:58 crc kubenswrapper[4813]: E1202 10:35:58.809408 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" containerName="init" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.809416 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" containerName="init" Dec 02 10:35:58 crc kubenswrapper[4813]: E1202 10:35:58.809426 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aeebb6e7-c26e-421b-ab9c-4b75379601bf" containerName="cinder-db-sync" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.809434 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeebb6e7-c26e-421b-ab9c-4b75379601bf" containerName="cinder-db-sync" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.809641 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff4d2ad-314e-429f-a35d-fc47c9ca89a2" containerName="dnsmasq-dns" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.809664 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeebb6e7-c26e-421b-ab9c-4b75379601bf" containerName="cinder-db-sync" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.840832 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cbdc6498b-rbl7g"] Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.841315 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.843978 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.844942 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.924453 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.926396 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.930367 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-config-data-custom\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.930428 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kbq4\" (UniqueName: \"kubernetes.io/projected/03b395a1-3aae-4528-9d4c-3d4dc0413de4-kube-api-access-4kbq4\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.930484 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-config-data\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.930509 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-public-tls-certs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.930545 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-combined-ca-bundle\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.930570 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b395a1-3aae-4528-9d4c-3d4dc0413de4-logs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.930631 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-internal-tls-certs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.933281 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.933510 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.933722 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wj4pr" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.934331 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.934659 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:35:58 crc kubenswrapper[4813]: I1202 10:35:58.984460 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-xjltw"] Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.024841 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-khmf6"] Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.026497 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.031863 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.031925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.031972 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-internal-tls-certs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032112 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-config-data-custom\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032143 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlffc\" (UniqueName: \"kubernetes.io/projected/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-kube-api-access-tlffc\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032187 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kbq4\" (UniqueName: \"kubernetes.io/projected/03b395a1-3aae-4528-9d4c-3d4dc0413de4-kube-api-access-4kbq4\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032229 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032258 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032284 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-config-data\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-public-tls-certs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032340 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-combined-ca-bundle\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032363 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b395a1-3aae-4528-9d4c-3d4dc0413de4-logs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.032897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b395a1-3aae-4528-9d4c-3d4dc0413de4-logs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.049803 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-khmf6"] Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.059953 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-public-tls-certs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.076439 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-combined-ca-bundle\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.090989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-internal-tls-certs\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.099680 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-config-data-custom\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.101420 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b395a1-3aae-4528-9d4c-3d4dc0413de4-config-data\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.113987 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kbq4\" (UniqueName: \"kubernetes.io/projected/03b395a1-3aae-4528-9d4c-3d4dc0413de4-kube-api-access-4kbq4\") pod \"barbican-api-7cbdc6498b-rbl7g\" (UID: \"03b395a1-3aae-4528-9d4c-3d4dc0413de4\") " pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.125583 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.127886 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150014 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlffc\" (UniqueName: \"kubernetes.io/projected/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-kube-api-access-tlffc\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150259 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150314 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgkdk\" (UniqueName: \"kubernetes.io/projected/df3c50a8-d38e-4243-9c68-3c8713072e3e-kube-api-access-qgkdk\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150515 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-config\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150541 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.150572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.156482 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.157921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.158554 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.173404 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.173858 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.179569 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cbdc6498b-rbl7g" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.182138 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlffc\" (UniqueName: \"kubernetes.io/projected/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-kube-api-access-tlffc\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.196141 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.204556 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.251661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.251717 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgkdk\" (UniqueName: \"kubernetes.io/projected/df3c50a8-d38e-4243-9c68-3c8713072e3e-kube-api-access-qgkdk\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.251765 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-scripts\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.251791 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.251859 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldhv\" (UniqueName: \"kubernetes.io/projected/08967e15-8ca5-4050-837b-73450acf920e-kube-api-access-7ldhv\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.251907 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-config\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.251935 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.251965 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.252009 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08967e15-8ca5-4050-837b-73450acf920e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.252047 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.252108 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data-custom\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.252136 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08967e15-8ca5-4050-837b-73450acf920e-logs\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.253059 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.253700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.253830 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.254090 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-config\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: 
\"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.260021 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.291956 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgkdk\" (UniqueName: \"kubernetes.io/projected/df3c50a8-d38e-4243-9c68-3c8713072e3e-kube-api-access-qgkdk\") pod \"dnsmasq-dns-6d97fcdd8f-khmf6\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.353822 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldhv\" (UniqueName: \"kubernetes.io/projected/08967e15-8ca5-4050-837b-73450acf920e-kube-api-access-7ldhv\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.354331 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08967e15-8ca5-4050-837b-73450acf920e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.354420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.354493 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data-custom\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.354565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08967e15-8ca5-4050-837b-73450acf920e-logs\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.354674 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-scripts\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.354736 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.355884 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08967e15-8ca5-4050-837b-73450acf920e-logs\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.355950 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08967e15-8ca5-4050-837b-73450acf920e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.359348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-scripts\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.360780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.362759 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data-custom\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.366795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.381840 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldhv\" (UniqueName: \"kubernetes.io/projected/08967e15-8ca5-4050-837b-73450acf920e-kube-api-access-7ldhv\") pod \"cinder-api-0\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") " pod="openstack/cinder-api-0" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.457950 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" Dec 02 10:35:59 crc kubenswrapper[4813]: I1202 10:35:59.551555 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 10:36:01 crc kubenswrapper[4813]: I1202 10:36:01.942474 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cbdc6498b-rbl7g"] Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.014790 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:36:02 crc kubenswrapper[4813]: W1202 10:36:02.024303 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb968b35_a64e_4b5e_a954_9fe4bbd544cc.slice/crio-e915d346e31fa7c768d7b2a4356437c4f3e8df2c992167046142a514c232d97b WatchSource:0}: Error finding container e915d346e31fa7c768d7b2a4356437c4f3e8df2c992167046142a514c232d97b: Status 404 returned error can't find the container with id e915d346e31fa7c768d7b2a4356437c4f3e8df2c992167046142a514c232d97b Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.094823 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-khmf6"] Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.201445 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:36:02 crc kubenswrapper[4813]: W1202 10:36:02.229691 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08967e15_8ca5_4050_837b_73450acf920e.slice/crio-f9be93bee115251ee95879b4bf6b26d2fd9f01b7177e1625e89e878425f2d303 WatchSource:0}: Error finding container f9be93bee115251ee95879b4bf6b26d2fd9f01b7177e1625e89e878425f2d303: Status 404 returned error can't find the container with id f9be93bee115251ee95879b4bf6b26d2fd9f01b7177e1625e89e878425f2d303 Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.507732 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.833955 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerStarted","Data":"8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3"} Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.835272 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.867719 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08967e15-8ca5-4050-837b-73450acf920e","Type":"ContainerStarted","Data":"f9be93bee115251ee95879b4bf6b26d2fd9f01b7177e1625e89e878425f2d303"} Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.891742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb968b35-a64e-4b5e-a954-9fe4bbd544cc","Type":"ContainerStarted","Data":"e915d346e31fa7c768d7b2a4356437c4f3e8df2c992167046142a514c232d97b"} Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.895121 4813 generic.go:334] "Generic (PLEG): container finished" podID="df3c50a8-d38e-4243-9c68-3c8713072e3e" containerID="39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2" exitCode=0 Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.895213 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" event={"ID":"df3c50a8-d38e-4243-9c68-3c8713072e3e","Type":"ContainerDied","Data":"39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2"} Dec 02 
10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.895231 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" event={"ID":"df3c50a8-d38e-4243-9c68-3c8713072e3e","Type":"ContainerStarted","Data":"629c695ee63a8e72b13dbd872a9d0a7e10ddaa1c65689ae4a70a39543e4491ba"}
Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.916997 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" event={"ID":"304c8d18-c208-459c-af1c-0b97e1de56b6","Type":"ContainerStarted","Data":"61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed"}
Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.917197 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" podUID="304c8d18-c208-459c-af1c-0b97e1de56b6" containerName="dnsmasq-dns" containerID="cri-o://61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed" gracePeriod=10
Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.917246 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-xjltw"
Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.941375 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" event={"ID":"c66503cc-41b9-44f1-8f42-f65908004aef","Type":"ContainerStarted","Data":"da4983322b5f0fb133abe5153206b54963082641273faa847e837929ad4d3064"}
Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.941439 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" event={"ID":"c66503cc-41b9-44f1-8f42-f65908004aef","Type":"ContainerStarted","Data":"c8b66404ff7731b59c4d6e35ae7e270e6b6920f949abdbdbc02404ca978daed7"}
Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.960700 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.065933016 podStartE2EDuration="9.960677009s" podCreationTimestamp="2025-12-02 10:35:53 +0000 UTC" firstStartedPulling="2025-12-02 10:35:54.583975164 +0000 UTC m=+1678.779149466" lastFinishedPulling="2025-12-02 10:36:01.478719157 +0000 UTC m=+1685.673893459" observedRunningTime="2025-12-02 10:36:02.931584887 +0000 UTC m=+1687.126759189" watchObservedRunningTime="2025-12-02 10:36:02.960677009 +0000 UTC m=+1687.155851311"
Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.984520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" event={"ID":"71d75001-77f3-47d8-822f-c2b72f0d9226","Type":"ContainerStarted","Data":"280e2b33e0eb5d2f2e907e618e30c88576c9ab576f76c74e763ff7a503ee7b95"}
Dec 02 10:36:02 crc kubenswrapper[4813]: I1202 10:36:02.984582 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" event={"ID":"71d75001-77f3-47d8-822f-c2b72f0d9226","Type":"ContainerStarted","Data":"9bef8941e3ac3321032494e480a147f658fbbf5014b28fa0d176e1561d82469a"}
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.013549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbdc6498b-rbl7g" event={"ID":"03b395a1-3aae-4528-9d4c-3d4dc0413de4","Type":"ContainerStarted","Data":"431ec3e35eb3778b4aed6a2935f261bf5859ebc4d118c21644ab846b04bce885"}
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.013595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbdc6498b-rbl7g" event={"ID":"03b395a1-3aae-4528-9d4c-3d4dc0413de4","Type":"ContainerStarted","Data":"24a04de30fed3bf2d743a8fc8ed0d498a58b1955c0ad55c63943f37d00a1f8a3"}
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.020506 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" podStartSLOduration=7.020482488 podStartE2EDuration="7.020482488s" podCreationTimestamp="2025-12-02 10:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:02.980951358 +0000 UTC m=+1687.176125660" watchObservedRunningTime="2025-12-02 10:36:03.020482488 +0000 UTC m=+1687.215656790"
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.068659 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376"
Dec 02 10:36:03 crc kubenswrapper[4813]: E1202 10:36:03.068938 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.072274 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-66d478b798-8tjdd" podStartSLOduration=3.062859805 podStartE2EDuration="7.072257608s" podCreationTimestamp="2025-12-02 10:35:56 +0000 UTC" firstStartedPulling="2025-12-02 10:35:57.461643764 +0000 UTC m=+1681.656818066" lastFinishedPulling="2025-12-02 10:36:01.471041567 +0000 UTC m=+1685.666215869" observedRunningTime="2025-12-02 10:36:03.03977449 +0000 UTC m=+1687.234948802" watchObservedRunningTime="2025-12-02 10:36:03.072257608 +0000 UTC m=+1687.267431910"
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.096697 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fbc57bb6c-lbqlm" podStartSLOduration=2.682762383 podStartE2EDuration="7.096680576s" podCreationTimestamp="2025-12-02 10:35:56 +0000 UTC" firstStartedPulling="2025-12-02 10:35:57.049362212 +0000 UTC m=+1681.244536504" lastFinishedPulling="2025-12-02 10:36:01.463280395 +0000 UTC m=+1685.658454697" observedRunningTime="2025-12-02 10:36:03.06148578 +0000 UTC m=+1687.256660082" watchObservedRunningTime="2025-12-02 10:36:03.096680576 +0000 UTC m=+1687.291854878"
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.657330 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-xjltw"
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.762168 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-dns-svc\") pod \"304c8d18-c208-459c-af1c-0b97e1de56b6\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") "
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.762287 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-config\") pod \"304c8d18-c208-459c-af1c-0b97e1de56b6\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") "
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.762408 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-sb\") pod \"304c8d18-c208-459c-af1c-0b97e1de56b6\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") "
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.762439 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-nb\") pod \"304c8d18-c208-459c-af1c-0b97e1de56b6\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") "
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.762554 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5r8f\" (UniqueName: \"kubernetes.io/projected/304c8d18-c208-459c-af1c-0b97e1de56b6-kube-api-access-d5r8f\") pod \"304c8d18-c208-459c-af1c-0b97e1de56b6\" (UID: \"304c8d18-c208-459c-af1c-0b97e1de56b6\") "
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.786471 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/304c8d18-c208-459c-af1c-0b97e1de56b6-kube-api-access-d5r8f" (OuterVolumeSpecName: "kube-api-access-d5r8f") pod "304c8d18-c208-459c-af1c-0b97e1de56b6" (UID: "304c8d18-c208-459c-af1c-0b97e1de56b6"). InnerVolumeSpecName "kube-api-access-d5r8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.829732 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-config" (OuterVolumeSpecName: "config") pod "304c8d18-c208-459c-af1c-0b97e1de56b6" (UID: "304c8d18-c208-459c-af1c-0b97e1de56b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.830761 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "304c8d18-c208-459c-af1c-0b97e1de56b6" (UID: "304c8d18-c208-459c-af1c-0b97e1de56b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.835756 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "304c8d18-c208-459c-af1c-0b97e1de56b6" (UID: "304c8d18-c208-459c-af1c-0b97e1de56b6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.864654 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.864688 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5r8f\" (UniqueName: \"kubernetes.io/projected/304c8d18-c208-459c-af1c-0b97e1de56b6-kube-api-access-d5r8f\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.864701 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.864709 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.871018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "304c8d18-c208-459c-af1c-0b97e1de56b6" (UID: "304c8d18-c208-459c-af1c-0b97e1de56b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:36:03 crc kubenswrapper[4813]: I1202 10:36:03.966325 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c8d18-c208-459c-af1c-0b97e1de56b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.033384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cbdc6498b-rbl7g" event={"ID":"03b395a1-3aae-4528-9d4c-3d4dc0413de4","Type":"ContainerStarted","Data":"b37d4a75bb5f770030262e879e342f7418a1b4054ad758e8d7b65b385281a21f"}
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.034515 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cbdc6498b-rbl7g"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.034548 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cbdc6498b-rbl7g"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.041710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" event={"ID":"df3c50a8-d38e-4243-9c68-3c8713072e3e","Type":"ContainerStarted","Data":"70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a"}
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.042442 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.059629 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7cbdc6498b-rbl7g" podStartSLOduration=6.059608502 podStartE2EDuration="6.059608502s" podCreationTimestamp="2025-12-02 10:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:04.050449431 +0000 UTC m=+1688.245623733" watchObservedRunningTime="2025-12-02 10:36:04.059608502 +0000 UTC m=+1688.254782804"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.075750 4813 generic.go:334] "Generic (PLEG): container finished" podID="304c8d18-c208-459c-af1c-0b97e1de56b6" containerID="61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed" exitCode=0
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.075990 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-xjltw"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.082868 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" podStartSLOduration=6.082755404 podStartE2EDuration="6.082755404s" podCreationTimestamp="2025-12-02 10:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:04.078977946 +0000 UTC m=+1688.274152248" watchObservedRunningTime="2025-12-02 10:36:04.082755404 +0000 UTC m=+1688.277929706"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.094690 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" event={"ID":"304c8d18-c208-459c-af1c-0b97e1de56b6","Type":"ContainerDied","Data":"61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed"}
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.094731 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-xjltw" event={"ID":"304c8d18-c208-459c-af1c-0b97e1de56b6","Type":"ContainerDied","Data":"6399e3e106eec59f00d71d81dde9d70db03d71064792af7a32f66be3919479fc"}
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.094742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08967e15-8ca5-4050-837b-73450acf920e","Type":"ContainerStarted","Data":"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494"}
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.094759 4813 scope.go:117] "RemoveContainer" containerID="61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.140248 4813 scope.go:117] "RemoveContainer" containerID="c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.141615 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-xjltw"]
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.149782 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-xjltw"]
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.175359 4813 scope.go:117] "RemoveContainer" containerID="61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed"
Dec 02 10:36:04 crc kubenswrapper[4813]: E1202 10:36:04.175943 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed\": container with ID starting with 61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed not found: ID does not exist" containerID="61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.176057 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed"} err="failed to get container status \"61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed\": rpc error: code = NotFound desc = could not find container \"61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed\": container with ID starting with 61a75a4ba0ff6bbbba0a27f971ae18092df00bb01d560cf94dd2120d97ed63ed not found: ID does not exist"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.176179 4813 scope.go:117] "RemoveContainer" containerID="c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87"
Dec 02 10:36:04 crc kubenswrapper[4813]: E1202 10:36:04.176572 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87\": container with ID starting with c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87 not found: ID does not exist" containerID="c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87"
Dec 02 10:36:04 crc kubenswrapper[4813]: I1202 10:36:04.176665 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87"} err="failed to get container status \"c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87\": rpc error: code = NotFound desc = could not find container \"c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87\": container with ID starting with c34c98f2aa088706dad1e7cd153dfa92d3674f934d922ccd20f98779acde1a87 not found: ID does not exist"
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.100844 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb968b35-a64e-4b5e-a954-9fe4bbd544cc","Type":"ContainerStarted","Data":"582e408018f687d67f921ebba2733d116566ea262cdb09a240184a15a27dab1a"}
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.101165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb968b35-a64e-4b5e-a954-9fe4bbd544cc","Type":"ContainerStarted","Data":"3948ad4da0d0301be0c5321fceb982e05f742aa3987377a4e39cd313c7db8184"}
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.105971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08967e15-8ca5-4050-837b-73450acf920e","Type":"ContainerStarted","Data":"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5"}
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.106252 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="08967e15-8ca5-4050-837b-73450acf920e" containerName="cinder-api-log" containerID="cri-o://b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494" gracePeriod=30
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.106358 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="08967e15-8ca5-4050-837b-73450acf920e" containerName="cinder-api" containerID="cri-o://647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5" gracePeriod=30
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.156398 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.855388345 podStartE2EDuration="7.156373574s" podCreationTimestamp="2025-12-02 10:35:58 +0000 UTC" firstStartedPulling="2025-12-02 10:36:02.031432806 +0000 UTC m=+1686.226607108" lastFinishedPulling="2025-12-02 10:36:03.332418035 +0000 UTC m=+1687.527592337" observedRunningTime="2025-12-02 10:36:05.127051446 +0000 UTC m=+1689.322225748" watchObservedRunningTime="2025-12-02 10:36:05.156373574 +0000 UTC m=+1689.351547876"
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.162885 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.16286875 podStartE2EDuration="6.16286875s" podCreationTimestamp="2025-12-02 10:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:05.149522629 +0000 UTC m=+1689.344696941" watchObservedRunningTime="2025-12-02 10:36:05.16286875 +0000 UTC m=+1689.358043052"
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.707025 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.812967 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-combined-ca-bundle\") pod \"08967e15-8ca5-4050-837b-73450acf920e\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") "
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.813035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data\") pod \"08967e15-8ca5-4050-837b-73450acf920e\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") "
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.813080 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ldhv\" (UniqueName: \"kubernetes.io/projected/08967e15-8ca5-4050-837b-73450acf920e-kube-api-access-7ldhv\") pod \"08967e15-8ca5-4050-837b-73450acf920e\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") "
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.813163 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08967e15-8ca5-4050-837b-73450acf920e-etc-machine-id\") pod \"08967e15-8ca5-4050-837b-73450acf920e\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") "
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.813187 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08967e15-8ca5-4050-837b-73450acf920e-logs\") pod \"08967e15-8ca5-4050-837b-73450acf920e\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") "
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.813287 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-scripts\") pod \"08967e15-8ca5-4050-837b-73450acf920e\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") "
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.813336 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data-custom\") pod \"08967e15-8ca5-4050-837b-73450acf920e\" (UID: \"08967e15-8ca5-4050-837b-73450acf920e\") "
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.814513 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08967e15-8ca5-4050-837b-73450acf920e-logs" (OuterVolumeSpecName: "logs") pod "08967e15-8ca5-4050-837b-73450acf920e" (UID: "08967e15-8ca5-4050-837b-73450acf920e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.814940 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08967e15-8ca5-4050-837b-73450acf920e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "08967e15-8ca5-4050-837b-73450acf920e" (UID: "08967e15-8ca5-4050-837b-73450acf920e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.829282 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-scripts" (OuterVolumeSpecName: "scripts") pod "08967e15-8ca5-4050-837b-73450acf920e" (UID: "08967e15-8ca5-4050-837b-73450acf920e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.830409 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08967e15-8ca5-4050-837b-73450acf920e" (UID: "08967e15-8ca5-4050-837b-73450acf920e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.834311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08967e15-8ca5-4050-837b-73450acf920e-kube-api-access-7ldhv" (OuterVolumeSpecName: "kube-api-access-7ldhv") pod "08967e15-8ca5-4050-837b-73450acf920e" (UID: "08967e15-8ca5-4050-837b-73450acf920e"). InnerVolumeSpecName "kube-api-access-7ldhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.856772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08967e15-8ca5-4050-837b-73450acf920e" (UID: "08967e15-8ca5-4050-837b-73450acf920e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.867638 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data" (OuterVolumeSpecName: "config-data") pod "08967e15-8ca5-4050-837b-73450acf920e" (UID: "08967e15-8ca5-4050-837b-73450acf920e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.914881 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.914916 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.914928 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.914936 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08967e15-8ca5-4050-837b-73450acf920e-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.914944 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ldhv\" (UniqueName: \"kubernetes.io/projected/08967e15-8ca5-4050-837b-73450acf920e-kube-api-access-7ldhv\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.914953 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08967e15-8ca5-4050-837b-73450acf920e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:05 crc kubenswrapper[4813]: I1202 10:36:05.914961 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08967e15-8ca5-4050-837b-73450acf920e-logs\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.085932 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="304c8d18-c208-459c-af1c-0b97e1de56b6" path="/var/lib/kubelet/pods/304c8d18-c208-459c-af1c-0b97e1de56b6/volumes"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.116502 4813 generic.go:334] "Generic (PLEG): container finished" podID="08967e15-8ca5-4050-837b-73450acf920e" containerID="647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5" exitCode=0
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.116541 4813 generic.go:334] "Generic (PLEG): container finished" podID="08967e15-8ca5-4050-837b-73450acf920e" containerID="b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494" exitCode=143
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.116573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08967e15-8ca5-4050-837b-73450acf920e","Type":"ContainerDied","Data":"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5"}
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.116621 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08967e15-8ca5-4050-837b-73450acf920e","Type":"ContainerDied","Data":"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494"}
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.116636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"08967e15-8ca5-4050-837b-73450acf920e","Type":"ContainerDied","Data":"f9be93bee115251ee95879b4bf6b26d2fd9f01b7177e1625e89e878425f2d303"}
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.116656 4813 scope.go:117] "RemoveContainer" containerID="647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.117759 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.155932 4813 scope.go:117] "RemoveContainer" containerID="b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.158414 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.172047 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.180234 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 02 10:36:06 crc kubenswrapper[4813]: E1202 10:36:06.180592 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="304c8d18-c208-459c-af1c-0b97e1de56b6" containerName="init"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.180612 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="304c8d18-c208-459c-af1c-0b97e1de56b6" containerName="init"
Dec 02 10:36:06 crc kubenswrapper[4813]: E1202 10:36:06.180627 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08967e15-8ca5-4050-837b-73450acf920e" containerName="cinder-api-log"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.180634 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="08967e15-8ca5-4050-837b-73450acf920e" containerName="cinder-api-log"
Dec 02 10:36:06 crc kubenswrapper[4813]: E1202 10:36:06.180651 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08967e15-8ca5-4050-837b-73450acf920e" containerName="cinder-api"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.180657 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="08967e15-8ca5-4050-837b-73450acf920e" containerName="cinder-api"
Dec 02 10:36:06 crc kubenswrapper[4813]: E1202 10:36:06.180667 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="304c8d18-c208-459c-af1c-0b97e1de56b6" containerName="dnsmasq-dns"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.180672 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="304c8d18-c208-459c-af1c-0b97e1de56b6" containerName="dnsmasq-dns"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.180851 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="08967e15-8ca5-4050-837b-73450acf920e" containerName="cinder-api-log"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.180872 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="08967e15-8ca5-4050-837b-73450acf920e" containerName="cinder-api"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.180885 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="304c8d18-c208-459c-af1c-0b97e1de56b6" containerName="dnsmasq-dns"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.182031 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.197629 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.198184 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.198326 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.198639 4813 scope.go:117] "RemoveContainer" containerID="647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5"
Dec 02 10:36:06 crc kubenswrapper[4813]: E1202 10:36:06.199809 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5\": container with ID starting with 647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5 not found: ID does not exist" containerID="647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.199848 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5"} err="failed to get container status \"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5\": rpc error: code = NotFound desc = could not find container \"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5\": container with ID starting with 647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5 not found: ID does not exist"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.199872 4813 scope.go:117] "RemoveContainer" containerID="b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494"
Dec 02 10:36:06 crc kubenswrapper[4813]: E1202 10:36:06.200716 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494\": container with ID starting with b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494 not found: ID does not exist" containerID="b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.200739 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494"} err="failed to get container status \"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494\": rpc error: code = NotFound desc = could not find container \"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494\": container with ID starting with b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494 not found: ID does not exist"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.200752 4813 scope.go:117] "RemoveContainer" containerID="647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.201171 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5"} err="failed to get container status \"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5\": rpc error: code = NotFound desc = could not find container \"647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5\": container with ID starting with 647112f5520deee3844c15257fac837c8808367d2abb036d45c8fc9232eccaa5 not found: ID does not exist"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.201196 4813 scope.go:117] "RemoveContainer" containerID="b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.204430 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494"} err="failed to get container status \"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494\": rpc error: code = NotFound desc = could not find container \"b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494\": container with ID starting with b8ebb283c7d7b4361f0c92805007c45c23241e3074bbbe1fc14940b8d4349494 not found: ID does not exist"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.234396 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.329174 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.329606 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-config-data\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.329725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.329843 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vdk\" (UniqueName: \"kubernetes.io/projected/84d8089e-8fae-4958-9b81-ee39f00022b7-kube-api-access-52vdk\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.329933 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84d8089e-8fae-4958-9b81-ee39f00022b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.330044 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-scripts\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.330155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.330254 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84d8089e-8fae-4958-9b81-ee39f00022b7-logs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.330457 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.432350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-scripts\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.432656 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.432749 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84d8089e-8fae-4958-9b81-ee39f00022b7-logs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.432813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.432930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.433030 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-config-data\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.433186 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.433293 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vdk\" (UniqueName: \"kubernetes.io/projected/84d8089e-8fae-4958-9b81-ee39f00022b7-kube-api-access-52vdk\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.433397 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84d8089e-8fae-4958-9b81-ee39f00022b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.433611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84d8089e-8fae-4958-9b81-ee39f00022b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.434713 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84d8089e-8fae-4958-9b81-ee39f00022b7-logs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.438279 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.438865 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.439557 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.439932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-config-data\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.440279 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.440400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84d8089e-8fae-4958-9b81-ee39f00022b7-scripts\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.454442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vdk\" (UniqueName: \"kubernetes.io/projected/84d8089e-8fae-4958-9b81-ee39f00022b7-kube-api-access-52vdk\") pod \"cinder-api-0\" (UID: \"84d8089e-8fae-4958-9b81-ee39f00022b7\") " pod="openstack/cinder-api-0"
Dec 02 10:36:06 crc kubenswrapper[4813]: I1202 10:36:06.505375 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 02 10:36:07 crc kubenswrapper[4813]: I1202 10:36:07.010993 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 02 10:36:07 crc kubenswrapper[4813]: W1202 10:36:07.013562 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84d8089e_8fae_4958_9b81_ee39f00022b7.slice/crio-19664c16e2cb0ef0c80da12e425d39159d8c17c81be8a6919ab512ae378ca48a WatchSource:0}: Error finding container 19664c16e2cb0ef0c80da12e425d39159d8c17c81be8a6919ab512ae378ca48a: Status 404 returned error can't find the container with id 19664c16e2cb0ef0c80da12e425d39159d8c17c81be8a6919ab512ae378ca48a
Dec 02 10:36:07 crc kubenswrapper[4813]: I1202 10:36:07.126255 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"84d8089e-8fae-4958-9b81-ee39f00022b7","Type":"ContainerStarted","Data":"19664c16e2cb0ef0c80da12e425d39159d8c17c81be8a6919ab512ae378ca48a"}
Dec 02 10:36:08 crc kubenswrapper[4813]: I1202 10:36:08.080093 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08967e15-8ca5-4050-837b-73450acf920e" path="/var/lib/kubelet/pods/08967e15-8ca5-4050-837b-73450acf920e/volumes"
Dec 02 10:36:08 crc kubenswrapper[4813]: I1202 10:36:08.145215 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"84d8089e-8fae-4958-9b81-ee39f00022b7","Type":"ContainerStarted","Data":"ba5feff9a3f8ccd0bb4be63e97187e34db8a9e5151875189237faec79f12f4e5"}
Dec 02 10:36:08 crc kubenswrapper[4813]: I1202 10:36:08.145461 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 02 10:36:08 crc kubenswrapper[4813]: I1202 10:36:08.174711 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.174692535 podStartE2EDuration="2.174692535s" podCreationTimestamp="2025-12-02 10:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:08.162486376 +0000 UTC m=+1692.357660678" watchObservedRunningTime="2025-12-02 10:36:08.174692535 +0000 UTC m=+1692.369866827"
Dec 02 10:36:08 crc kubenswrapper[4813]: I1202 10:36:08.326111 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5597b77cc8-ptxss"
Dec 02 10:36:08 crc kubenswrapper[4813]: I1202 10:36:08.376651 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5597b77cc8-ptxss"
Dec 02 10:36:09 crc kubenswrapper[4813]: I1202 10:36:09.157927 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"84d8089e-8fae-4958-9b81-ee39f00022b7","Type":"ContainerStarted","Data":"42b8972a049bd97a3f870966d17a49f9b44eb4eac34a2b39ba2404b5ce704844"}
Dec 02 10:36:09 crc kubenswrapper[4813]: I1202 10:36:09.261270 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 02 10:36:09 crc kubenswrapper[4813]: I1202 10:36:09.460242 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6"
Dec 02 10:36:09 crc kubenswrapper[4813]: I1202 10:36:09.524961 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-m7dlx"]
Dec 02 10:36:09 crc kubenswrapper[4813]: I1202 10:36:09.525275 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" podUID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" containerName="dnsmasq-dns" containerID="cri-o://61ae236aa9d2ec32ce4dc321a2ff7357989c540d24008f52547aaa9bf7ca455c" gracePeriod=10
Dec 02 10:36:09 crc kubenswrapper[4813]: I1202 10:36:09.571927 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.168460 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" containerID="61ae236aa9d2ec32ce4dc321a2ff7357989c540d24008f52547aaa9bf7ca455c" exitCode=0
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.168508 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" event={"ID":"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd","Type":"ContainerDied","Data":"61ae236aa9d2ec32ce4dc321a2ff7357989c540d24008f52547aaa9bf7ca455c"}
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.227049 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.624857 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx"
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.725653 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-dns-svc\") pod \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") "
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.725722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-sb\") pod \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") "
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.725814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-config\") pod \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") "
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.725852 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-nb\") pod \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") "
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.725915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mm5w\" (UniqueName: \"kubernetes.io/projected/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-kube-api-access-2mm5w\") pod \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\" (UID: \"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd\") "
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.747450 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-kube-api-access-2mm5w" (OuterVolumeSpecName: "kube-api-access-2mm5w") pod "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" (UID: "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd"). InnerVolumeSpecName "kube-api-access-2mm5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.776090 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-config" (OuterVolumeSpecName: "config") pod "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" (UID: "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.791717 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" (UID: "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.794300 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" (UID: "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.800110 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" (UID: "1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.827252 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.827288 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.827300 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.827307 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.827317 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mm5w\" (UniqueName: \"kubernetes.io/projected/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd-kube-api-access-2mm5w\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.958235 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cbdc6498b-rbl7g"
Dec 02 10:36:10 crc kubenswrapper[4813]: I1202 10:36:10.966001 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cbdc6498b-rbl7g"
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.032716 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5597b77cc8-ptxss"]
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.033231 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5597b77cc8-ptxss" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api-log" containerID="cri-o://021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb" gracePeriod=30
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.033764 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5597b77cc8-ptxss" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api" containerID="cri-o://d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d" gracePeriod=30
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.182983 4813 generic.go:334] "Generic (PLEG): container finished" podID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerID="021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb" exitCode=143
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.183090 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5597b77cc8-ptxss" event={"ID":"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5","Type":"ContainerDied","Data":"021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb"}
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.188001 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx" event={"ID":"1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd","Type":"ContainerDied","Data":"2a79b49cc76fb2a97a6151bf73ff8aa38e44407e396cdbd63258b1166b86d294"}
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.188273 4813 scope.go:117] "RemoveContainer" containerID="61ae236aa9d2ec32ce4dc321a2ff7357989c540d24008f52547aaa9bf7ca455c"
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.188508 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-m7dlx"
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.189145 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerName="cinder-scheduler" containerID="cri-o://3948ad4da0d0301be0c5321fceb982e05f742aa3987377a4e39cd313c7db8184" gracePeriod=30
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.189319 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerName="probe" containerID="cri-o://582e408018f687d67f921ebba2733d116566ea262cdb09a240184a15a27dab1a" gracePeriod=30
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.223722 4813 scope.go:117] "RemoveContainer" containerID="824a85ed507d70a2390a662099c1de3113cc150fa2ea87dec978bfdd0ec78fb6"
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.238153 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-m7dlx"]
Dec 02 10:36:11 crc kubenswrapper[4813]: I1202 10:36:11.252772 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-m7dlx"]
Dec 02 10:36:12 crc kubenswrapper[4813]: I1202 10:36:12.081216 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" path="/var/lib/kubelet/pods/1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd/volumes"
Dec 02 10:36:12 crc kubenswrapper[4813]: I1202 10:36:12.205971 4813 generic.go:334] "Generic (PLEG): container finished" podID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerID="582e408018f687d67f921ebba2733d116566ea262cdb09a240184a15a27dab1a" exitCode=0
Dec 02 10:36:12 crc kubenswrapper[4813]: I1202 10:36:12.206009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb968b35-a64e-4b5e-a954-9fe4bbd544cc","Type":"ContainerDied","Data":"582e408018f687d67f921ebba2733d116566ea262cdb09a240184a15a27dab1a"}
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.192362 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5597b77cc8-ptxss" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.151:9311/healthcheck\": read tcp 10.217.0.2:57316->10.217.0.151:9311: read: connection reset by peer"
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.192358 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5597b77cc8-ptxss" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.151:9311/healthcheck\": read tcp 10.217.0.2:57304->10.217.0.151:9311: read: connection reset by peer"
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.643451 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5597b77cc8-ptxss"
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.710611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data\") pod \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") "
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.710690 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-combined-ca-bundle\") pod \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") "
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.710842 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data-custom\") pod \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") "
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.710940 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsw2d\" (UniqueName: \"kubernetes.io/projected/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-kube-api-access-dsw2d\") pod \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") "
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.710977 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-logs\") pod \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\" (UID: \"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5\") "
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.712387 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-logs" (OuterVolumeSpecName: "logs") pod "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" (UID: "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.712890 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-logs\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.716921 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" (UID: "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.716953 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-kube-api-access-dsw2d" (OuterVolumeSpecName: "kube-api-access-dsw2d") pod "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" (UID: "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5"). InnerVolumeSpecName "kube-api-access-dsw2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.736281 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" (UID: "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.761632 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data" (OuterVolumeSpecName: "config-data") pod "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" (UID: "f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.814282 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.816042 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsw2d\" (UniqueName: \"kubernetes.io/projected/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-kube-api-access-dsw2d\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.816186 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:14 crc kubenswrapper[4813]: I1202 10:36:14.816263 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.234332 4813 generic.go:334] "Generic (PLEG): container finished" podID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerID="d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d" exitCode=0
Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.234408 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5597b77cc8-ptxss" event={"ID":"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5","Type":"ContainerDied","Data":"d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d"}
Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.234443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5597b77cc8-ptxss" event={"ID":"f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5","Type":"ContainerDied","Data":"50782d593a5c6369559b5b8146f8987599bf4c38a2705cb8b850b85bcbdab7e7"}
Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.234438 4813 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-api-5597b77cc8-ptxss" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.234488 4813 scope.go:117] "RemoveContainer" containerID="d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.238263 4813 generic.go:334] "Generic (PLEG): container finished" podID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerID="3948ad4da0d0301be0c5321fceb982e05f742aa3987377a4e39cd313c7db8184" exitCode=0 Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.238326 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb968b35-a64e-4b5e-a954-9fe4bbd544cc","Type":"ContainerDied","Data":"3948ad4da0d0301be0c5321fceb982e05f742aa3987377a4e39cd313c7db8184"} Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.268797 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5597b77cc8-ptxss"] Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.282579 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5597b77cc8-ptxss"] Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.332592 4813 scope.go:117] "RemoveContainer" containerID="021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.360611 4813 scope.go:117] "RemoveContainer" containerID="d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d" Dec 02 10:36:15 crc kubenswrapper[4813]: E1202 10:36:15.360982 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d\": container with ID starting with d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d not found: ID does not exist" containerID="d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.361028 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d"} err="failed to get container status \"d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d\": rpc error: code = NotFound desc = could not find container \"d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d\": container with ID starting with d25a7dbd9bbc356bf6e7eeee816784e17ec41c60b13686708fefb181c18afa1d not found: ID does not exist" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.361055 4813 scope.go:117] "RemoveContainer" containerID="021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb" Dec 02 10:36:15 crc kubenswrapper[4813]: E1202 10:36:15.361356 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb\": container with ID starting with 021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb not found: ID does not exist" containerID="021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.361390 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb"} err="failed to get container status \"021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb\": rpc error: code = 
NotFound desc = could not find container \"021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb\": container with ID starting with 021cd0bd9fb2b2b9932414146dd674aff786eaaf8e239ea927ca597cb5be3afb not found: ID does not exist" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.487465 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.525819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlffc\" (UniqueName: \"kubernetes.io/projected/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-kube-api-access-tlffc\") pod \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.525916 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-scripts\") pod \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.525957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data\") pod \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.525972 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-combined-ca-bundle\") pod \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.526011 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data-custom\") pod \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.526027 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-etc-machine-id\") pod \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\" (UID: \"bb968b35-a64e-4b5e-a954-9fe4bbd544cc\") " Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.526287 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bb968b35-a64e-4b5e-a954-9fe4bbd544cc" (UID: "bb968b35-a64e-4b5e-a954-9fe4bbd544cc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.540314 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-kube-api-access-tlffc" (OuterVolumeSpecName: "kube-api-access-tlffc") pod "bb968b35-a64e-4b5e-a954-9fe4bbd544cc" (UID: "bb968b35-a64e-4b5e-a954-9fe4bbd544cc"). InnerVolumeSpecName "kube-api-access-tlffc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.550366 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb968b35-a64e-4b5e-a954-9fe4bbd544cc" (UID: "bb968b35-a64e-4b5e-a954-9fe4bbd544cc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.550862 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-scripts" (OuterVolumeSpecName: "scripts") pod "bb968b35-a64e-4b5e-a954-9fe4bbd544cc" (UID: "bb968b35-a64e-4b5e-a954-9fe4bbd544cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.625050 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb968b35-a64e-4b5e-a954-9fe4bbd544cc" (UID: "bb968b35-a64e-4b5e-a954-9fe4bbd544cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.627623 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.627658 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.627666 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.627675 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlffc\" (UniqueName: \"kubernetes.io/projected/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-kube-api-access-tlffc\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.627685 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.645261 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data" (OuterVolumeSpecName: "config-data") pod "bb968b35-a64e-4b5e-a954-9fe4bbd544cc" (UID: "bb968b35-a64e-4b5e-a954-9fe4bbd544cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.729401 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb968b35-a64e-4b5e-a954-9fe4bbd544cc-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.918114 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:36:15 crc kubenswrapper[4813]: I1202 10:36:15.919130 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-58f7b95df4-zpk5x" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.081872 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" path="/var/lib/kubelet/pods/f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5/volumes" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.247556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb968b35-a64e-4b5e-a954-9fe4bbd544cc","Type":"ContainerDied","Data":"e915d346e31fa7c768d7b2a4356437c4f3e8df2c992167046142a514c232d97b"} Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.247623 4813 scope.go:117] "RemoveContainer" containerID="582e408018f687d67f921ebba2733d116566ea262cdb09a240184a15a27dab1a" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.247750 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.281837 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.289343 4813 scope.go:117] "RemoveContainer" containerID="3948ad4da0d0301be0c5321fceb982e05f742aa3987377a4e39cd313c7db8184" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.291945 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.308773 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:36:16 crc kubenswrapper[4813]: E1202 10:36:16.309181 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" containerName="dnsmasq-dns" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309200 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" containerName="dnsmasq-dns" Dec 02 10:36:16 crc kubenswrapper[4813]: E1202 10:36:16.309231 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309241 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api" Dec 02 10:36:16 crc kubenswrapper[4813]: E1202 10:36:16.309256 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api-log" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309264 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api-log" Dec 02 10:36:16 crc kubenswrapper[4813]: E1202 10:36:16.309285 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" containerName="init" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309292 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" containerName="init" Dec 02 10:36:16 crc kubenswrapper[4813]: E1202 10:36:16.309307 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerName="cinder-scheduler" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309314 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerName="cinder-scheduler" Dec 02 10:36:16 crc kubenswrapper[4813]: E1202 10:36:16.309331 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerName="probe" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309338 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerName="probe" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309551 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c4eb3d5-1f3c-4beb-8170-9edbea23a2fd" containerName="dnsmasq-dns" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309569 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309584 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerName="cinder-scheduler" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309595 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" containerName="probe" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.309609 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d9b57a-f4b2-4c1d-b85a-9bb65c1ce7b5" containerName="barbican-api-log" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.310748 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.313658 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.322257 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.344197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8sql\" (UniqueName: \"kubernetes.io/projected/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-kube-api-access-b8sql\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.344262 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.344295 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.344511 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-config-data\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.344619 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.344697 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-scripts\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.446314 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8sql\" (UniqueName: \"kubernetes.io/projected/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-kube-api-access-b8sql\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.446374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.446396 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.446432 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-config-data\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.446471 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.446509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-scripts\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.446784 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.450866 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.451185 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-scripts\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.451222 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-config-data\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.451656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.465131 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8sql\" (UniqueName: \"kubernetes.io/projected/5db6a7c9-1d30-44e1-8357-5964f5d4cb09-kube-api-access-b8sql\") pod \"cinder-scheduler-0\" (UID: \"5db6a7c9-1d30-44e1-8357-5964f5d4cb09\") " pod="openstack/cinder-scheduler-0" Dec 
02 10:36:16 crc kubenswrapper[4813]: I1202 10:36:16.647285 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 10:36:17 crc kubenswrapper[4813]: I1202 10:36:17.082788 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 10:36:17 crc kubenswrapper[4813]: W1202 10:36:17.084235 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db6a7c9_1d30_44e1_8357_5964f5d4cb09.slice/crio-c845e1b5a5dc0a584b815174989444b1091468cefbb2d0b66fabc20ccbf32b24 WatchSource:0}: Error finding container c845e1b5a5dc0a584b815174989444b1091468cefbb2d0b66fabc20ccbf32b24: Status 404 returned error can't find the container with id c845e1b5a5dc0a584b815174989444b1091468cefbb2d0b66fabc20ccbf32b24 Dec 02 10:36:17 crc kubenswrapper[4813]: I1202 10:36:17.273399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5db6a7c9-1d30-44e1-8357-5964f5d4cb09","Type":"ContainerStarted","Data":"c845e1b5a5dc0a584b815174989444b1091468cefbb2d0b66fabc20ccbf32b24"} Dec 02 10:36:18 crc kubenswrapper[4813]: I1202 10:36:18.068766 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:36:18 crc kubenswrapper[4813]: E1202 10:36:18.069023 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:36:18 crc kubenswrapper[4813]: I1202 10:36:18.083476 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb968b35-a64e-4b5e-a954-9fe4bbd544cc" path="/var/lib/kubelet/pods/bb968b35-a64e-4b5e-a954-9fe4bbd544cc/volumes" Dec 02 10:36:18 crc kubenswrapper[4813]: I1202 10:36:18.299237 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5db6a7c9-1d30-44e1-8357-5964f5d4cb09","Type":"ContainerStarted","Data":"4a5faf3f30c848a52d0099b21981ef4483ff0472f7c7ce8dcf13017bb6d8d601"} Dec 02 10:36:18 crc kubenswrapper[4813]: I1202 10:36:18.571217 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-75b688f9cf-8gj5w" Dec 02 10:36:18 crc kubenswrapper[4813]: I1202 10:36:18.614482 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 02 10:36:19 crc kubenswrapper[4813]: I1202 10:36:19.312916 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5db6a7c9-1d30-44e1-8357-5964f5d4cb09","Type":"ContainerStarted","Data":"4fad5f25bf4baf12a1f01a7edec96d24441d2868429cdfb6c974b9b8abb68dd6"} Dec 02 10:36:19 crc kubenswrapper[4813]: I1202 10:36:19.331216 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.331196039 podStartE2EDuration="3.331196039s" podCreationTimestamp="2025-12-02 10:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:19.330748466 +0000 UTC m=+1703.525922778" 
watchObservedRunningTime="2025-12-02 10:36:19.331196039 +0000 UTC m=+1703.526370361" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.085191 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.086634 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.088463 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9mjq7" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.088733 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.088849 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.111266 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.122345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config-secret\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.122420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.122462 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlh7j\" (UniqueName: \"kubernetes.io/projected/d85ad63a-cec1-49e2-bb25-db26eccfb858-kube-api-access-xlh7j\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.122532 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.223670 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.223770 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config-secret\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.223804 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.223831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlh7j\" (UniqueName: \"kubernetes.io/projected/d85ad63a-cec1-49e2-bb25-db26eccfb858-kube-api-access-xlh7j\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.224545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.229564 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config-secret\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.229948 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.243128 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlh7j\" (UniqueName: \"kubernetes.io/projected/d85ad63a-cec1-49e2-bb25-db26eccfb858-kube-api-access-xlh7j\") pod \"openstackclient\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.370054 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.375440 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.396731 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.406418 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.407651 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.415001 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.455063 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9k7s\" (UniqueName: \"kubernetes.io/projected/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-kube-api-access-f9k7s\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.455158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-openstack-config-secret\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.455228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.455254 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-openstack-config\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: E1202 10:36:20.488351 4813 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 02 10:36:20 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_d85ad63a-cec1-49e2-bb25-db26eccfb858_0(fb094e5eee1a2d97daad274e5ff4f0c0fca5bca4a3008888f92129e3bf21756b): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb094e5eee1a2d97daad274e5ff4f0c0fca5bca4a3008888f92129e3bf21756b" Netns:"/var/run/netns/7069f3f3-cc85-42e9-913a-45f96bd2dbdd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=fb094e5eee1a2d97daad274e5ff4f0c0fca5bca4a3008888f92129e3bf21756b;K8S_POD_UID=d85ad63a-cec1-49e2-bb25-db26eccfb858" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/d85ad63a-cec1-49e2-bb25-db26eccfb858]: expected pod UID "d85ad63a-cec1-49e2-bb25-db26eccfb858" but got "7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f" from Kube API Dec 02 10:36:20 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 02 10:36:20 crc kubenswrapper[4813]: > Dec 02 10:36:20 crc kubenswrapper[4813]: E1202 10:36:20.488443 4813 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err=< Dec 02 10:36:20 crc kubenswrapper[4813]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_d85ad63a-cec1-49e2-bb25-db26eccfb858_0(fb094e5eee1a2d97daad274e5ff4f0c0fca5bca4a3008888f92129e3bf21756b): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb094e5eee1a2d97daad274e5ff4f0c0fca5bca4a3008888f92129e3bf21756b" Netns:"/var/run/netns/7069f3f3-cc85-42e9-913a-45f96bd2dbdd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=fb094e5eee1a2d97daad274e5ff4f0c0fca5bca4a3008888f92129e3bf21756b;K8S_POD_UID=d85ad63a-cec1-49e2-bb25-db26eccfb858" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/d85ad63a-cec1-49e2-bb25-db26eccfb858]: expected pod UID "d85ad63a-cec1-49e2-bb25-db26eccfb858" but got "7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f" from Kube API Dec 02 10:36:20 crc kubenswrapper[4813]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 02 10:36:20 crc kubenswrapper[4813]: > pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.556614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9k7s\" (UniqueName: \"kubernetes.io/projected/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-kube-api-access-f9k7s\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.556705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-openstack-config-secret\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.556800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.556833 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-openstack-config\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.558002 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-openstack-config\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.561054 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-openstack-config-secret\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.563611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.571454 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9k7s\" (UniqueName: \"kubernetes.io/projected/7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f-kube-api-access-f9k7s\") pod \"openstackclient\" (UID: \"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f\") " pod="openstack/openstackclient" Dec 02 10:36:20 crc kubenswrapper[4813]: I1202 10:36:20.826291 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.260183 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.333190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f","Type":"ContainerStarted","Data":"f8a8462b748f7af4fa26a54e5c6b2175f368b2650b9c74d7f93687b13bfc288a"} Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.333216 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.336351 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d85ad63a-cec1-49e2-bb25-db26eccfb858" podUID="7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.350827 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.473267 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-combined-ca-bundle\") pod \"d85ad63a-cec1-49e2-bb25-db26eccfb858\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.473597 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config-secret\") pod \"d85ad63a-cec1-49e2-bb25-db26eccfb858\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.473674 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config\") pod \"d85ad63a-cec1-49e2-bb25-db26eccfb858\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.473797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlh7j\" (UniqueName: \"kubernetes.io/projected/d85ad63a-cec1-49e2-bb25-db26eccfb858-kube-api-access-xlh7j\") pod \"d85ad63a-cec1-49e2-bb25-db26eccfb858\" (UID: \"d85ad63a-cec1-49e2-bb25-db26eccfb858\") " Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.474249 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d85ad63a-cec1-49e2-bb25-db26eccfb858" (UID: "d85ad63a-cec1-49e2-bb25-db26eccfb858"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.479934 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d85ad63a-cec1-49e2-bb25-db26eccfb858" (UID: "d85ad63a-cec1-49e2-bb25-db26eccfb858"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.480129 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d85ad63a-cec1-49e2-bb25-db26eccfb858" (UID: "d85ad63a-cec1-49e2-bb25-db26eccfb858"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.484262 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85ad63a-cec1-49e2-bb25-db26eccfb858-kube-api-access-xlh7j" (OuterVolumeSpecName: "kube-api-access-xlh7j") pod "d85ad63a-cec1-49e2-bb25-db26eccfb858" (UID: "d85ad63a-cec1-49e2-bb25-db26eccfb858"). InnerVolumeSpecName "kube-api-access-xlh7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.575872 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.575905 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.575918 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d85ad63a-cec1-49e2-bb25-db26eccfb858-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.575928 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlh7j\" (UniqueName: \"kubernetes.io/projected/d85ad63a-cec1-49e2-bb25-db26eccfb858-kube-api-access-xlh7j\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:21 crc kubenswrapper[4813]: I1202 10:36:21.648241 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 02 10:36:22 crc kubenswrapper[4813]: I1202 10:36:22.077863 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85ad63a-cec1-49e2-bb25-db26eccfb858" path="/var/lib/kubelet/pods/d85ad63a-cec1-49e2-bb25-db26eccfb858/volumes" Dec 02 10:36:22 crc kubenswrapper[4813]: I1202 10:36:22.265814 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:36:22 crc kubenswrapper[4813]: I1202 10:36:22.341667 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 10:36:22 crc kubenswrapper[4813]: I1202 10:36:22.347293 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d85ad63a-cec1-49e2-bb25-db26eccfb858" podUID="7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f" Dec 02 10:36:23 crc kubenswrapper[4813]: I1202 10:36:23.557873 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:23 crc kubenswrapper[4813]: I1202 10:36:23.558526 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="ceilometer-notification-agent" containerID="cri-o://eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6" gracePeriod=30 Dec 02 10:36:23 crc kubenswrapper[4813]: I1202 10:36:23.558553 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="sg-core" containerID="cri-o://2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29" gracePeriod=30 Dec 02 10:36:23 crc kubenswrapper[4813]: I1202 10:36:23.558488 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="ceilometer-central-agent" containerID="cri-o://f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779" gracePeriod=30 Dec 02 10:36:23 crc kubenswrapper[4813]: I1202 10:36:23.558564 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="proxy-httpd" containerID="cri-o://8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3" gracePeriod=30 Dec 02 10:36:23 crc kubenswrapper[4813]: I1202 10:36:23.568813 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.146:3000/\": EOF" Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.130255 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.146:3000/\": dial tcp 10.217.0.146:3000: connect: connection refused" Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.365093 4813 generic.go:334] "Generic (PLEG): container finished" podID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerID="8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3" exitCode=0 Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.365128 4813 generic.go:334] "Generic (PLEG): container finished" podID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerID="2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29" exitCode=2 Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.365137 4813 generic.go:334] "Generic (PLEG): container finished" podID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerID="f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779" exitCode=0 Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.365158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerDied","Data":"8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3"} Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.365183 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerDied","Data":"2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29"} Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.365193 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerDied","Data":"f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779"} Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.775928 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5764b7874f-mhh86" Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.853587 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c87584f6d-77bdt"] Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.853786 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c87584f6d-77bdt" podUID="368ae054-4a5a-4187-a078-e1db70e84741" containerName="neutron-api" containerID="cri-o://0608141ae8fe5118f289835c3b1426426dd7de32fa5c6aa0d6155a04e2ea5cb0" gracePeriod=30 Dec 02 10:36:24 crc kubenswrapper[4813]: I1202 10:36:24.854159 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c87584f6d-77bdt" podUID="368ae054-4a5a-4187-a078-e1db70e84741" containerName="neutron-httpd" containerID="cri-o://dc9c9ef7925c57c5b5671e409ccc460c4a293e3fbf7b33465f6a2afb1d10db9f" gracePeriod=30 Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.375924 4813 generic.go:334] "Generic (PLEG): container finished" podID="368ae054-4a5a-4187-a078-e1db70e84741" containerID="dc9c9ef7925c57c5b5671e409ccc460c4a293e3fbf7b33465f6a2afb1d10db9f" exitCode=0 Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.375977 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c87584f6d-77bdt" event={"ID":"368ae054-4a5a-4187-a078-e1db70e84741","Type":"ContainerDied","Data":"dc9c9ef7925c57c5b5671e409ccc460c4a293e3fbf7b33465f6a2afb1d10db9f"} Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.776341 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.875658 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-run-httpd\") pod \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.875755 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-sg-core-conf-yaml\") pod \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.875794 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-log-httpd\") pod \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.875862 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vmm4\" (UniqueName: \"kubernetes.io/projected/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-kube-api-access-2vmm4\") pod \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.875903 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-combined-ca-bundle\") pod \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.875926 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-config-data\") pod \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.875995 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-scripts\") pod \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\" (UID: \"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9\") " Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.876357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" (UID: "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.876819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" (UID: "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.883888 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-kube-api-access-2vmm4" (OuterVolumeSpecName: "kube-api-access-2vmm4") pod "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" (UID: "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9"). InnerVolumeSpecName "kube-api-access-2vmm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.884040 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-scripts" (OuterVolumeSpecName: "scripts") pod "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" (UID: "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.906751 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" (UID: "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.955617 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" (UID: "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.977685 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vmm4\" (UniqueName: \"kubernetes.io/projected/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-kube-api-access-2vmm4\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.977726 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.977737 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.977746 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.977755 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:25 crc kubenswrapper[4813]: I1202 10:36:25.977762 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.002322 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-config-data" (OuterVolumeSpecName: "config-data") pod "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" (UID: "4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.079145 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.385285 4813 generic.go:334] "Generic (PLEG): container finished" podID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerID="eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6" exitCode=0 Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.385330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerDied","Data":"eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6"} Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.385358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9","Type":"ContainerDied","Data":"8cb586dc5b9596d7c7a44fdf9478a1656f320b5ab5eefa5b5adc22ef6519f8b8"} Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.385366 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.385375 4813 scope.go:117] "RemoveContainer" containerID="8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.409864 4813 scope.go:117] "RemoveContainer" containerID="2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.412693 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.440342 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.449544 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:26 crc kubenswrapper[4813]: E1202 10:36:26.449878 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="sg-core" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.449894 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="sg-core" Dec 02 10:36:26 crc kubenswrapper[4813]: E1202 10:36:26.449911 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="ceilometer-notification-agent" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.449918 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="ceilometer-notification-agent" Dec 02 10:36:26 crc kubenswrapper[4813]: E1202 10:36:26.449936 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="ceilometer-central-agent" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.449944 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" 
containerName="ceilometer-central-agent" Dec 02 10:36:26 crc kubenswrapper[4813]: E1202 10:36:26.449956 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="proxy-httpd" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.449961 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="proxy-httpd" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.450127 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="sg-core" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.450143 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="ceilometer-notification-agent" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.450153 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="proxy-httpd" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.450163 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" containerName="ceilometer-central-agent" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.451704 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.453655 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.456961 4813 scope.go:117] "RemoveContainer" containerID="eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.461831 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.462724 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.488279 4813 scope.go:117] "RemoveContainer" containerID="f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.582253 4813 scope.go:117] "RemoveContainer" containerID="8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3" Dec 02 10:36:26 crc kubenswrapper[4813]: E1202 10:36:26.586230 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3\": container with ID starting with 8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3 not found: ID does not exist" containerID="8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.586281 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3"} err="failed to get container status \"8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3\": rpc error: code = NotFound desc = could not find container \"8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3\": container with ID starting with 8d66fc25734b1590f4656576aca978a2767031c1fb422857998005d1e2218cd3 not found: ID does not exist" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 
10:36:26.586313 4813 scope.go:117] "RemoveContainer" containerID="2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29" Dec 02 10:36:26 crc kubenswrapper[4813]: E1202 10:36:26.590472 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29\": container with ID starting with 2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29 not found: ID does not exist" containerID="2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.590519 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29"} err="failed to get container status \"2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29\": rpc error: code = NotFound desc = could not find container \"2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29\": container with ID starting with 2a07914d7276afbfa8f1312fb83e05d7ec070c9080fbacaca774655d42a1ac29 not found: ID does not exist" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.590557 4813 scope.go:117] "RemoveContainer" containerID="eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.590670 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.590802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-config-data\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.590975 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-log-httpd\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.591117 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.591143 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-run-httpd\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.591172 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-248rk\" (UniqueName: \"kubernetes.io/projected/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-kube-api-access-248rk\") pod \"ceilometer-0\" (UID: 
\"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.591200 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-scripts\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: E1202 10:36:26.594387 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6\": container with ID starting with eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6 not found: ID does not exist" containerID="eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.594420 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6"} err="failed to get container status \"eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6\": rpc error: code = NotFound desc = could not find container \"eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6\": container with ID starting with eeb5203356b1f44f28b0927cf2897837a64155b819625f4dc652f3a9175782c6 not found: ID does not exist" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.594439 4813 scope.go:117] "RemoveContainer" containerID="f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779" Dec 02 10:36:26 crc kubenswrapper[4813]: E1202 10:36:26.606268 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779\": container with ID starting with f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779 not found: ID does not exist" containerID="f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.606306 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779"} err="failed to get container status \"f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779\": rpc error: code = NotFound desc = could not find container \"f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779\": container with ID starting with f5019ee3487d7338736d81f752e7adf02a2b95bcf491eb75372c11e1e2dd9779 not found: ID does not exist" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.692501 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-config-data\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.692592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-log-httpd\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.692666 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.692692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-run-httpd\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.692725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-248rk\" (UniqueName: \"kubernetes.io/projected/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-kube-api-access-248rk\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.692763 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-scripts\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.692821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.693203 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-run-httpd\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.693804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-log-httpd\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.698389 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-scripts\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.698637 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-config-data\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.699217 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.705804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.712658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-248rk\" (UniqueName: \"kubernetes.io/projected/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-kube-api-access-248rk\") pod \"ceilometer-0\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.785927 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:26 crc kubenswrapper[4813]: I1202 10:36:26.876222 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 02 10:36:27 crc kubenswrapper[4813]: I1202 10:36:27.295747 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:27 crc kubenswrapper[4813]: W1202 10:36:27.301408 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3106ec2d_c61c_4632_97d7_c1fd2056a1ac.slice/crio-db634ad238334e7de4da047e7708fe597e33b50fa7f1ad1e1ac1be016c07a9b8 WatchSource:0}: Error finding container db634ad238334e7de4da047e7708fe597e33b50fa7f1ad1e1ac1be016c07a9b8: Status 404 returned error can't find the container with id db634ad238334e7de4da047e7708fe597e33b50fa7f1ad1e1ac1be016c07a9b8 Dec 02 10:36:27 crc kubenswrapper[4813]: I1202 10:36:27.399334 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerStarted","Data":"db634ad238334e7de4da047e7708fe597e33b50fa7f1ad1e1ac1be016c07a9b8"} Dec 02 10:36:27 crc kubenswrapper[4813]: I1202 10:36:27.808447 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:28 crc kubenswrapper[4813]: I1202 10:36:28.077708 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9" path="/var/lib/kubelet/pods/4e4ea4b5-e701-4959-9f2d-ed0b16ba82a9/volumes" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.705910 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qnt25"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.707845 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.717106 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qnt25"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.740511 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f2e281-93f1-4265-9590-d90489b8fb83-operator-scripts\") pod \"nova-api-db-create-qnt25\" (UID: \"f2f2e281-93f1-4265-9590-d90489b8fb83\") " pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.740629 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkwz\" (UniqueName: \"kubernetes.io/projected/f2f2e281-93f1-4265-9590-d90489b8fb83-kube-api-access-tjkwz\") pod \"nova-api-db-create-qnt25\" (UID: \"f2f2e281-93f1-4265-9590-d90489b8fb83\") " pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.779859 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-67g6c"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.782905 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.808509 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-fec7-account-create-update-ltbqz"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.810839 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.817553 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.833371 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fec7-account-create-update-ltbqz"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.842580 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-operator-scripts\") pod \"nova-api-fec7-account-create-update-ltbqz\" (UID: \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\") " pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.843614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjkwz\" (UniqueName: \"kubernetes.io/projected/f2f2e281-93f1-4265-9590-d90489b8fb83-kube-api-access-tjkwz\") pod \"nova-api-db-create-qnt25\" (UID: \"f2f2e281-93f1-4265-9590-d90489b8fb83\") " pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.843721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4k9l\" (UniqueName: \"kubernetes.io/projected/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-kube-api-access-h4k9l\") pod \"nova-api-fec7-account-create-update-ltbqz\" (UID: \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\") " pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.843774 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21b686f-4092-4e67-9883-a7489038286c-operator-scripts\") pod \"nova-cell0-db-create-67g6c\" (UID: \"d21b686f-4092-4e67-9883-a7489038286c\") " pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.843856 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwpmw\" (UniqueName: \"kubernetes.io/projected/d21b686f-4092-4e67-9883-a7489038286c-kube-api-access-jwpmw\") pod \"nova-cell0-db-create-67g6c\" (UID: \"d21b686f-4092-4e67-9883-a7489038286c\") " pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.843971 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f2e281-93f1-4265-9590-d90489b8fb83-operator-scripts\") pod \"nova-api-db-create-qnt25\" (UID: \"f2f2e281-93f1-4265-9590-d90489b8fb83\") " pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.845099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f2e281-93f1-4265-9590-d90489b8fb83-operator-scripts\") pod \"nova-api-db-create-qnt25\" (UID: \"f2f2e281-93f1-4265-9590-d90489b8fb83\") " pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.848585 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-67g6c"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.875785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjkwz\" (UniqueName: \"kubernetes.io/projected/f2f2e281-93f1-4265-9590-d90489b8fb83-kube-api-access-tjkwz\") pod \"nova-api-db-create-qnt25\" (UID: \"f2f2e281-93f1-4265-9590-d90489b8fb83\") " pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.892438 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-whrfm"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.903949 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-whrfm"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.904052 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.945919 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4k9l\" (UniqueName: \"kubernetes.io/projected/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-kube-api-access-h4k9l\") pod \"nova-api-fec7-account-create-update-ltbqz\" (UID: \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\") " pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.945986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21b686f-4092-4e67-9883-a7489038286c-operator-scripts\") pod \"nova-cell0-db-create-67g6c\" (UID: \"d21b686f-4092-4e67-9883-a7489038286c\") " pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.946202 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwpmw\" (UniqueName: \"kubernetes.io/projected/d21b686f-4092-4e67-9883-a7489038286c-kube-api-access-jwpmw\") pod \"nova-cell0-db-create-67g6c\" (UID: \"d21b686f-4092-4e67-9883-a7489038286c\") " pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.946446 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-operator-scripts\") pod \"nova-api-fec7-account-create-update-ltbqz\" (UID: \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\") " pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.946836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21b686f-4092-4e67-9883-a7489038286c-operator-scripts\") pod \"nova-cell0-db-create-67g6c\" (UID: \"d21b686f-4092-4e67-9883-a7489038286c\") " pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.947429 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-operator-scripts\") pod \"nova-api-fec7-account-create-update-ltbqz\" (UID: \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\") " pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.977387 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d58b-account-create-update-v9r6z"] Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.978859 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.980820 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.981607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4k9l\" (UniqueName: \"kubernetes.io/projected/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-kube-api-access-h4k9l\") pod \"nova-api-fec7-account-create-update-ltbqz\" (UID: \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\") " pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.984610 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwpmw\" (UniqueName: \"kubernetes.io/projected/d21b686f-4092-4e67-9883-a7489038286c-kube-api-access-jwpmw\") pod \"nova-cell0-db-create-67g6c\" (UID: \"d21b686f-4092-4e67-9883-a7489038286c\") " pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:30 crc kubenswrapper[4813]: I1202 10:36:30.992273 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d58b-account-create-update-v9r6z"] Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.029499 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.049119 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40614701-94ba-4e26-bd54-c5f04422c5fa-operator-scripts\") pod \"nova-cell1-db-create-whrfm\" (UID: \"40614701-94ba-4e26-bd54-c5f04422c5fa\") " pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.049219 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbm86\" (UniqueName: \"kubernetes.io/projected/40614701-94ba-4e26-bd54-c5f04422c5fa-kube-api-access-vbm86\") pod \"nova-cell1-db-create-whrfm\" (UID: \"40614701-94ba-4e26-bd54-c5f04422c5fa\") " pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.130572 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.144956 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.150589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40614701-94ba-4e26-bd54-c5f04422c5fa-operator-scripts\") pod \"nova-cell1-db-create-whrfm\" (UID: \"40614701-94ba-4e26-bd54-c5f04422c5fa\") " pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.150695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbm86\" (UniqueName: \"kubernetes.io/projected/40614701-94ba-4e26-bd54-c5f04422c5fa-kube-api-access-vbm86\") pod \"nova-cell1-db-create-whrfm\" (UID: \"40614701-94ba-4e26-bd54-c5f04422c5fa\") " pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.150937 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjbpm\" (UniqueName: \"kubernetes.io/projected/8315d930-9aff-4394-9711-042ed1e4de69-kube-api-access-xjbpm\") pod \"nova-cell0-d58b-account-create-update-v9r6z\" (UID: \"8315d930-9aff-4394-9711-042ed1e4de69\") " pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.150976 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8315d930-9aff-4394-9711-042ed1e4de69-operator-scripts\") pod \"nova-cell0-d58b-account-create-update-v9r6z\" (UID: \"8315d930-9aff-4394-9711-042ed1e4de69\") " pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.151622 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40614701-94ba-4e26-bd54-c5f04422c5fa-operator-scripts\") pod \"nova-cell1-db-create-whrfm\" (UID: \"40614701-94ba-4e26-bd54-c5f04422c5fa\") " pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.178167 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbm86\" (UniqueName: \"kubernetes.io/projected/40614701-94ba-4e26-bd54-c5f04422c5fa-kube-api-access-vbm86\") pod \"nova-cell1-db-create-whrfm\" (UID: \"40614701-94ba-4e26-bd54-c5f04422c5fa\") " pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.189918 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0305-account-create-update-gck59"] Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.191535 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.193431 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.203391 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0305-account-create-update-gck59"] Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.226480 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.255330 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjbpm\" (UniqueName: \"kubernetes.io/projected/8315d930-9aff-4394-9711-042ed1e4de69-kube-api-access-xjbpm\") pod \"nova-cell0-d58b-account-create-update-v9r6z\" (UID: \"8315d930-9aff-4394-9711-042ed1e4de69\") " pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.255441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8315d930-9aff-4394-9711-042ed1e4de69-operator-scripts\") pod \"nova-cell0-d58b-account-create-update-v9r6z\" (UID: \"8315d930-9aff-4394-9711-042ed1e4de69\") " pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.257499 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8315d930-9aff-4394-9711-042ed1e4de69-operator-scripts\") pod \"nova-cell0-d58b-account-create-update-v9r6z\" (UID: \"8315d930-9aff-4394-9711-042ed1e4de69\") " pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.272141 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjbpm\" (UniqueName: \"kubernetes.io/projected/8315d930-9aff-4394-9711-042ed1e4de69-kube-api-access-xjbpm\") pod \"nova-cell0-d58b-account-create-update-v9r6z\" (UID: \"8315d930-9aff-4394-9711-042ed1e4de69\") " pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.355432 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.356800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmd4s\" (UniqueName: \"kubernetes.io/projected/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-kube-api-access-lmd4s\") pod \"nova-cell1-0305-account-create-update-gck59\" (UID: \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\") " pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.356837 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-operator-scripts\") pod \"nova-cell1-0305-account-create-update-gck59\" (UID: \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\") " pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.458847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmd4s\" (UniqueName: \"kubernetes.io/projected/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-kube-api-access-lmd4s\") pod \"nova-cell1-0305-account-create-update-gck59\" (UID: \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\") " pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.458937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-operator-scripts\") pod \"nova-cell1-0305-account-create-update-gck59\" (UID: \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\") " pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.459721 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-operator-scripts\") pod \"nova-cell1-0305-account-create-update-gck59\" (UID: \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\") " pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.476796 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmd4s\" (UniqueName: \"kubernetes.io/projected/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-kube-api-access-lmd4s\") pod \"nova-cell1-0305-account-create-update-gck59\" (UID: \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\") " pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:31 crc kubenswrapper[4813]: I1202 10:36:31.535427 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:32 crc kubenswrapper[4813]: I1202 10:36:32.067767 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:36:32 crc kubenswrapper[4813]: E1202 10:36:32.068059 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:36:32 crc kubenswrapper[4813]: I1202 10:36:32.454965 4813 generic.go:334] "Generic (PLEG): container finished" podID="368ae054-4a5a-4187-a078-e1db70e84741" containerID="0608141ae8fe5118f289835c3b1426426dd7de32fa5c6aa0d6155a04e2ea5cb0" exitCode=0 Dec 02 10:36:32 crc kubenswrapper[4813]: I1202 10:36:32.455008 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c87584f6d-77bdt" event={"ID":"368ae054-4a5a-4187-a078-e1db70e84741","Type":"ContainerDied","Data":"0608141ae8fe5118f289835c3b1426426dd7de32fa5c6aa0d6155a04e2ea5cb0"} Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.564426 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.706084 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-combined-ca-bundle\") pod \"368ae054-4a5a-4187-a078-e1db70e84741\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.706176 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-httpd-config\") pod \"368ae054-4a5a-4187-a078-e1db70e84741\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.706288 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-ovndb-tls-certs\") pod \"368ae054-4a5a-4187-a078-e1db70e84741\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.706328 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-config\") pod \"368ae054-4a5a-4187-a078-e1db70e84741\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.706394 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmvfn\" (UniqueName: \"kubernetes.io/projected/368ae054-4a5a-4187-a078-e1db70e84741-kube-api-access-hmvfn\") pod \"368ae054-4a5a-4187-a078-e1db70e84741\" (UID: \"368ae054-4a5a-4187-a078-e1db70e84741\") " Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.711688 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-httpd-config" (OuterVolumeSpecName: "httpd-config") pod 
"368ae054-4a5a-4187-a078-e1db70e84741" (UID: "368ae054-4a5a-4187-a078-e1db70e84741"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.712190 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368ae054-4a5a-4187-a078-e1db70e84741-kube-api-access-hmvfn" (OuterVolumeSpecName: "kube-api-access-hmvfn") pod "368ae054-4a5a-4187-a078-e1db70e84741" (UID: "368ae054-4a5a-4187-a078-e1db70e84741"). InnerVolumeSpecName "kube-api-access-hmvfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.756354 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-config" (OuterVolumeSpecName: "config") pod "368ae054-4a5a-4187-a078-e1db70e84741" (UID: "368ae054-4a5a-4187-a078-e1db70e84741"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.757000 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "368ae054-4a5a-4187-a078-e1db70e84741" (UID: "368ae054-4a5a-4187-a078-e1db70e84741"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.795954 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "368ae054-4a5a-4187-a078-e1db70e84741" (UID: "368ae054-4a5a-4187-a078-e1db70e84741"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.807866 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fec7-account-create-update-ltbqz"] Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.809367 4813 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.809402 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.809413 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmvfn\" (UniqueName: \"kubernetes.io/projected/368ae054-4a5a-4187-a078-e1db70e84741-kube-api-access-hmvfn\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.809425 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.809435 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/368ae054-4a5a-4187-a078-e1db70e84741-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.827602 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-67g6c"] Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.864198 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qnt25"] Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.960457 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0305-account-create-update-gck59"] Dec 02 10:36:33 crc kubenswrapper[4813]: I1202 10:36:33.980688 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-whrfm"] Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.111621 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d58b-account-create-update-v9r6z"] Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.482482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c87584f6d-77bdt" event={"ID":"368ae054-4a5a-4187-a078-e1db70e84741","Type":"ContainerDied","Data":"3efc5168f0105927cdf37db0557a13951e5c5596f09e6b276c12fa833f387c77"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.482868 4813 scope.go:117] "RemoveContainer" containerID="dc9c9ef7925c57c5b5671e409ccc460c4a293e3fbf7b33465f6a2afb1d10db9f" Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.482581 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c87584f6d-77bdt" Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.485112 4813 generic.go:334] "Generic (PLEG): container finished" podID="8cc7289e-4a0c-4273-9d08-89b05ea88cb2" containerID="94c2ae9ce4c805a17f3d25cb9a9ad16a2dab3bf460058bae8aa9ebbfbcf9d2e4" exitCode=0 Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.485228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fec7-account-create-update-ltbqz" event={"ID":"8cc7289e-4a0c-4273-9d08-89b05ea88cb2","Type":"ContainerDied","Data":"94c2ae9ce4c805a17f3d25cb9a9ad16a2dab3bf460058bae8aa9ebbfbcf9d2e4"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.485312 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fec7-account-create-update-ltbqz" event={"ID":"8cc7289e-4a0c-4273-9d08-89b05ea88cb2","Type":"ContainerStarted","Data":"43fbef09173dc0816f4f1563aec32aa76b179e8e977d8f9a98fad68f2d54ff2e"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.496987 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2f2e281-93f1-4265-9590-d90489b8fb83" containerID="57c01f7a0e990127dcb4d6408f1c8010a68abc9dae55def79f90097b280ade5e" exitCode=0 Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.497054 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnt25" event={"ID":"f2f2e281-93f1-4265-9590-d90489b8fb83","Type":"ContainerDied","Data":"57c01f7a0e990127dcb4d6408f1c8010a68abc9dae55def79f90097b280ade5e"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.497094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnt25" event={"ID":"f2f2e281-93f1-4265-9590-d90489b8fb83","Type":"ContainerStarted","Data":"30070979127cb85dd5897b4e8166a6fdcbb903c2441f1a4853c3a9c7a6ebb6c3"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.499164 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f","Type":"ContainerStarted","Data":"246cbe9985fdac3e2f7158c22816e8af4d2499285c6bfffc20205e6ac1bb5e9a"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.502823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" event={"ID":"8315d930-9aff-4394-9711-042ed1e4de69","Type":"ContainerStarted","Data":"5cc045b2d1289035698962d5a53f9b79f283acdcab045cc1b94422ba8e96970f"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.502860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" event={"ID":"8315d930-9aff-4394-9711-042ed1e4de69","Type":"ContainerStarted","Data":"35f750af950398f3deedf7347a0b812e49272f53e8d66db8798047046ae3e5a4"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.512234 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerStarted","Data":"ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.520281 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-whrfm" event={"ID":"40614701-94ba-4e26-bd54-c5f04422c5fa","Type":"ContainerStarted","Data":"bfd85f6b28b7a89527f1976baac297dffd44b7ee257bc88e4eb2fe78d1c5b0c7"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.520338 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-whrfm" event={"ID":"40614701-94ba-4e26-bd54-c5f04422c5fa","Type":"ContainerStarted","Data":"40e2fdb5aca2b4ee2680b4db40b921527b4974a0a318d1efcb0471cbcbaa4c05"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.527657 4813 scope.go:117] "RemoveContainer" containerID="0608141ae8fe5118f289835c3b1426426dd7de32fa5c6aa0d6155a04e2ea5cb0" Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.527896 4813 generic.go:334] "Generic (PLEG): container finished" podID="d21b686f-4092-4e67-9883-a7489038286c" containerID="6be1abe79456acd0225b1bd4877db818029e8822f156c4d7c872ca38fa8a1368" exitCode=0 Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.528038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-67g6c" event={"ID":"d21b686f-4092-4e67-9883-a7489038286c","Type":"ContainerDied","Data":"6be1abe79456acd0225b1bd4877db818029e8822f156c4d7c872ca38fa8a1368"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.528083 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-67g6c" event={"ID":"d21b686f-4092-4e67-9883-a7489038286c","Type":"ContainerStarted","Data":"ccb216a91f81ce75718aac1ea4066c320d114178c48383b52f418ebda1e647e9"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.537907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0305-account-create-update-gck59" event={"ID":"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b","Type":"ContainerStarted","Data":"95fe92e373406952c5b80e6bcb0d675d66f9f6131deadd4a7d8ae4f9655b3555"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.537958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0305-account-create-update-gck59" event={"ID":"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b","Type":"ContainerStarted","Data":"565ad63329da3646f175cb67bfe8c4abdab840dae2bc5455379015f63b747956"} Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.543903 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.53217105 podStartE2EDuration="14.543886532s" podCreationTimestamp="2025-12-02 10:36:20 +0000 UTC" firstStartedPulling="2025-12-02 10:36:21.266638064 +0000 UTC m=+1705.461812366" lastFinishedPulling="2025-12-02 10:36:33.278353536 +0000 UTC m=+1717.473527848" observedRunningTime="2025-12-02 10:36:34.537580812 +0000 UTC m=+1718.732755114" watchObservedRunningTime="2025-12-02 10:36:34.543886532 +0000 UTC m=+1718.739060824" Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.562478 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-whrfm" podStartSLOduration=4.562461933 podStartE2EDuration="4.562461933s" podCreationTimestamp="2025-12-02 10:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:34.554973229 +0000 UTC m=+1718.750147531" watchObservedRunningTime="2025-12-02 10:36:34.562461933 +0000 UTC m=+1718.757636235" Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.628363 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" podStartSLOduration=4.628335946 podStartE2EDuration="4.628335946s" podCreationTimestamp="2025-12-02 10:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 
10:36:34.601578591 +0000 UTC m=+1718.796752913" watchObservedRunningTime="2025-12-02 10:36:34.628335946 +0000 UTC m=+1718.823510258" Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.653014 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c87584f6d-77bdt"] Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.666027 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-0305-account-create-update-gck59" podStartSLOduration=3.666011963 podStartE2EDuration="3.666011963s" podCreationTimestamp="2025-12-02 10:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:36:34.638306041 +0000 UTC m=+1718.833480343" watchObservedRunningTime="2025-12-02 10:36:34.666011963 +0000 UTC m=+1718.861186265" Dec 02 10:36:34 crc kubenswrapper[4813]: I1202 10:36:34.666522 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c87584f6d-77bdt"] Dec 02 10:36:35 crc kubenswrapper[4813]: I1202 10:36:35.548853 4813 generic.go:334] "Generic (PLEG): container finished" podID="40614701-94ba-4e26-bd54-c5f04422c5fa" containerID="bfd85f6b28b7a89527f1976baac297dffd44b7ee257bc88e4eb2fe78d1c5b0c7" exitCode=0 Dec 02 10:36:35 crc kubenswrapper[4813]: I1202 10:36:35.548899 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-whrfm" event={"ID":"40614701-94ba-4e26-bd54-c5f04422c5fa","Type":"ContainerDied","Data":"bfd85f6b28b7a89527f1976baac297dffd44b7ee257bc88e4eb2fe78d1c5b0c7"} Dec 02 10:36:35 crc kubenswrapper[4813]: I1202 10:36:35.551363 4813 generic.go:334] "Generic (PLEG): container finished" podID="f60b319e-ff22-4ef4-a0b7-02f50ef06c3b" containerID="95fe92e373406952c5b80e6bcb0d675d66f9f6131deadd4a7d8ae4f9655b3555" exitCode=0 Dec 02 10:36:35 crc kubenswrapper[4813]: I1202 10:36:35.551455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0305-account-create-update-gck59" event={"ID":"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b","Type":"ContainerDied","Data":"95fe92e373406952c5b80e6bcb0d675d66f9f6131deadd4a7d8ae4f9655b3555"} Dec 02 10:36:35 crc kubenswrapper[4813]: I1202 10:36:35.553105 4813 generic.go:334] "Generic (PLEG): container finished" podID="8315d930-9aff-4394-9711-042ed1e4de69" containerID="5cc045b2d1289035698962d5a53f9b79f283acdcab045cc1b94422ba8e96970f" exitCode=0 Dec 02 10:36:35 crc kubenswrapper[4813]: I1202 10:36:35.553174 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" event={"ID":"8315d930-9aff-4394-9711-042ed1e4de69","Type":"ContainerDied","Data":"5cc045b2d1289035698962d5a53f9b79f283acdcab045cc1b94422ba8e96970f"} Dec 02 10:36:35 crc kubenswrapper[4813]: I1202 10:36:35.554996 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerStarted","Data":"503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec"} Dec 02 10:36:35 crc kubenswrapper[4813]: I1202 10:36:35.555033 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerStarted","Data":"838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047"} Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.079330 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368ae054-4a5a-4187-a078-e1db70e84741" 
path="/var/lib/kubelet/pods/368ae054-4a5a-4187-a078-e1db70e84741/volumes" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.123318 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.129284 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.209742 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.331631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4k9l\" (UniqueName: \"kubernetes.io/projected/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-kube-api-access-h4k9l\") pod \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\" (UID: \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\") " Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.331756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21b686f-4092-4e67-9883-a7489038286c-operator-scripts\") pod \"d21b686f-4092-4e67-9883-a7489038286c\" (UID: \"d21b686f-4092-4e67-9883-a7489038286c\") " Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.331807 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwpmw\" (UniqueName: \"kubernetes.io/projected/d21b686f-4092-4e67-9883-a7489038286c-kube-api-access-jwpmw\") pod \"d21b686f-4092-4e67-9883-a7489038286c\" (UID: \"d21b686f-4092-4e67-9883-a7489038286c\") " Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.331840 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjkwz\" (UniqueName: \"kubernetes.io/projected/f2f2e281-93f1-4265-9590-d90489b8fb83-kube-api-access-tjkwz\") pod \"f2f2e281-93f1-4265-9590-d90489b8fb83\" (UID: \"f2f2e281-93f1-4265-9590-d90489b8fb83\") " Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.331913 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-operator-scripts\") pod \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\" (UID: \"8cc7289e-4a0c-4273-9d08-89b05ea88cb2\") " Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.331946 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f2e281-93f1-4265-9590-d90489b8fb83-operator-scripts\") pod \"f2f2e281-93f1-4265-9590-d90489b8fb83\" (UID: \"f2f2e281-93f1-4265-9590-d90489b8fb83\") " Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.332757 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2f2e281-93f1-4265-9590-d90489b8fb83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2f2e281-93f1-4265-9590-d90489b8fb83" (UID: "f2f2e281-93f1-4265-9590-d90489b8fb83"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.336287 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8cc7289e-4a0c-4273-9d08-89b05ea88cb2" (UID: "8cc7289e-4a0c-4273-9d08-89b05ea88cb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.336854 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d21b686f-4092-4e67-9883-a7489038286c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d21b686f-4092-4e67-9883-a7489038286c" (UID: "d21b686f-4092-4e67-9883-a7489038286c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.343297 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-kube-api-access-h4k9l" (OuterVolumeSpecName: "kube-api-access-h4k9l") pod "8cc7289e-4a0c-4273-9d08-89b05ea88cb2" (UID: "8cc7289e-4a0c-4273-9d08-89b05ea88cb2"). InnerVolumeSpecName "kube-api-access-h4k9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.361511 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f2e281-93f1-4265-9590-d90489b8fb83-kube-api-access-tjkwz" (OuterVolumeSpecName: "kube-api-access-tjkwz") pod "f2f2e281-93f1-4265-9590-d90489b8fb83" (UID: "f2f2e281-93f1-4265-9590-d90489b8fb83"). InnerVolumeSpecName "kube-api-access-tjkwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.364257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21b686f-4092-4e67-9883-a7489038286c-kube-api-access-jwpmw" (OuterVolumeSpecName: "kube-api-access-jwpmw") pod "d21b686f-4092-4e67-9883-a7489038286c" (UID: "d21b686f-4092-4e67-9883-a7489038286c"). InnerVolumeSpecName "kube-api-access-jwpmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.433577 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.433613 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2f2e281-93f1-4265-9590-d90489b8fb83-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.433623 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4k9l\" (UniqueName: \"kubernetes.io/projected/8cc7289e-4a0c-4273-9d08-89b05ea88cb2-kube-api-access-h4k9l\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.433632 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21b686f-4092-4e67-9883-a7489038286c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.433641 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwpmw\" (UniqueName: \"kubernetes.io/projected/d21b686f-4092-4e67-9883-a7489038286c-kube-api-access-jwpmw\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.433649 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjkwz\" (UniqueName: \"kubernetes.io/projected/f2f2e281-93f1-4265-9590-d90489b8fb83-kube-api-access-tjkwz\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.571128 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnt25" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.575192 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnt25" event={"ID":"f2f2e281-93f1-4265-9590-d90489b8fb83","Type":"ContainerDied","Data":"30070979127cb85dd5897b4e8166a6fdcbb903c2441f1a4853c3a9c7a6ebb6c3"} Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.575251 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30070979127cb85dd5897b4e8166a6fdcbb903c2441f1a4853c3a9c7a6ebb6c3" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.577458 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fec7-account-create-update-ltbqz" event={"ID":"8cc7289e-4a0c-4273-9d08-89b05ea88cb2","Type":"ContainerDied","Data":"43fbef09173dc0816f4f1563aec32aa76b179e8e977d8f9a98fad68f2d54ff2e"} Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.577510 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43fbef09173dc0816f4f1563aec32aa76b179e8e977d8f9a98fad68f2d54ff2e" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.577575 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fec7-account-create-update-ltbqz" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.579991 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-67g6c" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.581244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-67g6c" event={"ID":"d21b686f-4092-4e67-9883-a7489038286c","Type":"ContainerDied","Data":"ccb216a91f81ce75718aac1ea4066c320d114178c48383b52f418ebda1e647e9"} Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.581275 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccb216a91f81ce75718aac1ea4066c320d114178c48383b52f418ebda1e647e9" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.977750 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:36 crc kubenswrapper[4813]: I1202 10:36:36.989287 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.000969 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.041273 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8315d930-9aff-4394-9711-042ed1e4de69-operator-scripts\") pod \"8315d930-9aff-4394-9711-042ed1e4de69\" (UID: \"8315d930-9aff-4394-9711-042ed1e4de69\") " Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.041385 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40614701-94ba-4e26-bd54-c5f04422c5fa-operator-scripts\") pod \"40614701-94ba-4e26-bd54-c5f04422c5fa\" (UID: \"40614701-94ba-4e26-bd54-c5f04422c5fa\") " Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.041436 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbm86\" (UniqueName: \"kubernetes.io/projected/40614701-94ba-4e26-bd54-c5f04422c5fa-kube-api-access-vbm86\") pod \"40614701-94ba-4e26-bd54-c5f04422c5fa\" (UID: \"40614701-94ba-4e26-bd54-c5f04422c5fa\") " Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.041466 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-operator-scripts\") pod \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\" (UID: \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\") " Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.041498 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjbpm\" (UniqueName: \"kubernetes.io/projected/8315d930-9aff-4394-9711-042ed1e4de69-kube-api-access-xjbpm\") pod \"8315d930-9aff-4394-9711-042ed1e4de69\" (UID: \"8315d930-9aff-4394-9711-042ed1e4de69\") " Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.041530 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmd4s\" (UniqueName: \"kubernetes.io/projected/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-kube-api-access-lmd4s\") pod \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\" (UID: \"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b\") " Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.042442 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f60b319e-ff22-4ef4-a0b7-02f50ef06c3b" (UID: "f60b319e-ff22-4ef4-a0b7-02f50ef06c3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.046532 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40614701-94ba-4e26-bd54-c5f04422c5fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40614701-94ba-4e26-bd54-c5f04422c5fa" (UID: "40614701-94ba-4e26-bd54-c5f04422c5fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.047210 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8315d930-9aff-4394-9711-042ed1e4de69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8315d930-9aff-4394-9711-042ed1e4de69" (UID: "8315d930-9aff-4394-9711-042ed1e4de69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.049347 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8315d930-9aff-4394-9711-042ed1e4de69-kube-api-access-xjbpm" (OuterVolumeSpecName: "kube-api-access-xjbpm") pod "8315d930-9aff-4394-9711-042ed1e4de69" (UID: "8315d930-9aff-4394-9711-042ed1e4de69"). InnerVolumeSpecName "kube-api-access-xjbpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.049484 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-kube-api-access-lmd4s" (OuterVolumeSpecName: "kube-api-access-lmd4s") pod "f60b319e-ff22-4ef4-a0b7-02f50ef06c3b" (UID: "f60b319e-ff22-4ef4-a0b7-02f50ef06c3b"). InnerVolumeSpecName "kube-api-access-lmd4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.049656 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40614701-94ba-4e26-bd54-c5f04422c5fa-kube-api-access-vbm86" (OuterVolumeSpecName: "kube-api-access-vbm86") pod "40614701-94ba-4e26-bd54-c5f04422c5fa" (UID: "40614701-94ba-4e26-bd54-c5f04422c5fa"). InnerVolumeSpecName "kube-api-access-vbm86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.144104 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40614701-94ba-4e26-bd54-c5f04422c5fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.144141 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbm86\" (UniqueName: \"kubernetes.io/projected/40614701-94ba-4e26-bd54-c5f04422c5fa-kube-api-access-vbm86\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.144156 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.144169 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjbpm\" (UniqueName: \"kubernetes.io/projected/8315d930-9aff-4394-9711-042ed1e4de69-kube-api-access-xjbpm\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.144179 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmd4s\" (UniqueName: \"kubernetes.io/projected/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b-kube-api-access-lmd4s\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.144190 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8315d930-9aff-4394-9711-042ed1e4de69-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.588300 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-whrfm" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.588293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-whrfm" event={"ID":"40614701-94ba-4e26-bd54-c5f04422c5fa","Type":"ContainerDied","Data":"40e2fdb5aca2b4ee2680b4db40b921527b4974a0a318d1efcb0471cbcbaa4c05"} Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.588679 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40e2fdb5aca2b4ee2680b4db40b921527b4974a0a318d1efcb0471cbcbaa4c05" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.589694 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0305-account-create-update-gck59" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.589705 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0305-account-create-update-gck59" event={"ID":"f60b319e-ff22-4ef4-a0b7-02f50ef06c3b","Type":"ContainerDied","Data":"565ad63329da3646f175cb67bfe8c4abdab840dae2bc5455379015f63b747956"} Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.589843 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="565ad63329da3646f175cb67bfe8c4abdab840dae2bc5455379015f63b747956" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.590874 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" event={"ID":"8315d930-9aff-4394-9711-042ed1e4de69","Type":"ContainerDied","Data":"35f750af950398f3deedf7347a0b812e49272f53e8d66db8798047046ae3e5a4"} Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.590910 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f750af950398f3deedf7347a0b812e49272f53e8d66db8798047046ae3e5a4" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.590960 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d58b-account-create-update-v9r6z" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.608776 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerStarted","Data":"67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623"} Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.609104 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.609131 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="ceilometer-notification-agent" containerID="cri-o://838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047" gracePeriod=30 Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.609146 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="proxy-httpd" containerID="cri-o://67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623" gracePeriod=30 Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.609196 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="sg-core" containerID="cri-o://503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec" gracePeriod=30 Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.609100 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="ceilometer-central-agent" containerID="cri-o://ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754" gracePeriod=30 Dec 02 10:36:37 crc kubenswrapper[4813]: I1202 10:36:37.638214 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.837190487 podStartE2EDuration="11.638194144s" podCreationTimestamp="2025-12-02 10:36:26 +0000 UTC" 
firstStartedPulling="2025-12-02 10:36:27.30380037 +0000 UTC m=+1711.498974672" lastFinishedPulling="2025-12-02 10:36:37.104804027 +0000 UTC m=+1721.299978329" observedRunningTime="2025-12-02 10:36:37.635478837 +0000 UTC m=+1721.830653149" watchObservedRunningTime="2025-12-02 10:36:37.638194144 +0000 UTC m=+1721.833368446" Dec 02 10:36:38 crc kubenswrapper[4813]: I1202 10:36:38.621985 4813 generic.go:334] "Generic (PLEG): container finished" podID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerID="67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623" exitCode=0 Dec 02 10:36:38 crc kubenswrapper[4813]: I1202 10:36:38.622345 4813 generic.go:334] "Generic (PLEG): container finished" podID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerID="503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec" exitCode=2 Dec 02 10:36:38 crc kubenswrapper[4813]: I1202 10:36:38.622053 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerDied","Data":"67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623"} Dec 02 10:36:38 crc kubenswrapper[4813]: I1202 10:36:38.622390 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerDied","Data":"503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec"} Dec 02 10:36:38 crc kubenswrapper[4813]: I1202 10:36:38.622402 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerDied","Data":"838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047"} Dec 02 10:36:38 crc kubenswrapper[4813]: I1202 10:36:38.622357 4813 generic.go:334] "Generic (PLEG): container finished" podID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerID="838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047" exitCode=0 Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.349534 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mhjsh"] Dec 02 10:36:41 crc kubenswrapper[4813]: E1202 10:36:41.350286 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f2e281-93f1-4265-9590-d90489b8fb83" containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350301 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f2e281-93f1-4265-9590-d90489b8fb83" containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: E1202 10:36:41.350313 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8315d930-9aff-4394-9711-042ed1e4de69" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350321 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8315d930-9aff-4394-9711-042ed1e4de69" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: E1202 10:36:41.350335 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc7289e-4a0c-4273-9d08-89b05ea88cb2" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350342 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc7289e-4a0c-4273-9d08-89b05ea88cb2" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: E1202 10:36:41.350374 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f60b319e-ff22-4ef4-a0b7-02f50ef06c3b" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350382 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60b319e-ff22-4ef4-a0b7-02f50ef06c3b" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: E1202 10:36:41.350395 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21b686f-4092-4e67-9883-a7489038286c" containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350402 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21b686f-4092-4e67-9883-a7489038286c" containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: E1202 10:36:41.350417 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368ae054-4a5a-4187-a078-e1db70e84741" containerName="neutron-api" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350426 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="368ae054-4a5a-4187-a078-e1db70e84741" containerName="neutron-api" Dec 02 10:36:41 crc kubenswrapper[4813]: E1202 10:36:41.350437 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368ae054-4a5a-4187-a078-e1db70e84741" containerName="neutron-httpd" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350444 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="368ae054-4a5a-4187-a078-e1db70e84741" containerName="neutron-httpd" Dec 02 10:36:41 crc kubenswrapper[4813]: E1202 10:36:41.350464 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40614701-94ba-4e26-bd54-c5f04422c5fa" containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350473 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="40614701-94ba-4e26-bd54-c5f04422c5fa" containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350693 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f2e281-93f1-4265-9590-d90489b8fb83" containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350711 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21b686f-4092-4e67-9883-a7489038286c" containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350727 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="368ae054-4a5a-4187-a078-e1db70e84741" containerName="neutron-httpd" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350740 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc7289e-4a0c-4273-9d08-89b05ea88cb2" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350752 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8315d930-9aff-4394-9711-042ed1e4de69" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350763 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="368ae054-4a5a-4187-a078-e1db70e84741" containerName="neutron-api" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350776 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60b319e-ff22-4ef4-a0b7-02f50ef06c3b" containerName="mariadb-account-create-update" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.350789 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="40614701-94ba-4e26-bd54-c5f04422c5fa" 
containerName="mariadb-database-create" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.351498 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.353693 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8z5df" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.354552 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.357110 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.359971 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mhjsh"] Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.533353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.533750 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-config-data\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.533783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-scripts\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.533847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpf7\" (UniqueName: \"kubernetes.io/projected/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-kube-api-access-djpf7\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.635459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djpf7\" (UniqueName: \"kubernetes.io/projected/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-kube-api-access-djpf7\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.635828 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.636026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-config-data\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.636203 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-scripts\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.641965 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-scripts\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.643047 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-config-data\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.644503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.653285 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djpf7\" (UniqueName: \"kubernetes.io/projected/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-kube-api-access-djpf7\") pod \"nova-cell0-conductor-db-sync-mhjsh\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:41 crc kubenswrapper[4813]: I1202 10:36:41.711597 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:36:42 crc kubenswrapper[4813]: I1202 10:36:42.122969 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mhjsh"] Dec 02 10:36:42 crc kubenswrapper[4813]: I1202 10:36:42.657739 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" event={"ID":"3cc60b93-1ed9-43f7-9aa7-20d830692b2a","Type":"ContainerStarted","Data":"63b231d60e47c9a0199ee3d464194187a6293a12350e8e32e673d76847a7b6e1"} Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.115991 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.268466 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-config-data\") pod \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.268508 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-run-httpd\") pod \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.268552 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-248rk\" (UniqueName: \"kubernetes.io/projected/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-kube-api-access-248rk\") pod \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.268618 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-combined-ca-bundle\") pod \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.268672 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-sg-core-conf-yaml\") pod \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.268688 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-scripts\") pod \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.268760 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-log-httpd\") pod \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\" (UID: \"3106ec2d-c61c-4632-97d7-c1fd2056a1ac\") " Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.269384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3106ec2d-c61c-4632-97d7-c1fd2056a1ac" (UID: "3106ec2d-c61c-4632-97d7-c1fd2056a1ac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.269636 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3106ec2d-c61c-4632-97d7-c1fd2056a1ac" (UID: "3106ec2d-c61c-4632-97d7-c1fd2056a1ac"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.273917 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-kube-api-access-248rk" (OuterVolumeSpecName: "kube-api-access-248rk") pod "3106ec2d-c61c-4632-97d7-c1fd2056a1ac" (UID: "3106ec2d-c61c-4632-97d7-c1fd2056a1ac"). InnerVolumeSpecName "kube-api-access-248rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.277476 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-scripts" (OuterVolumeSpecName: "scripts") pod "3106ec2d-c61c-4632-97d7-c1fd2056a1ac" (UID: "3106ec2d-c61c-4632-97d7-c1fd2056a1ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.307473 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3106ec2d-c61c-4632-97d7-c1fd2056a1ac" (UID: "3106ec2d-c61c-4632-97d7-c1fd2056a1ac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.344690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3106ec2d-c61c-4632-97d7-c1fd2056a1ac" (UID: "3106ec2d-c61c-4632-97d7-c1fd2056a1ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.370429 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.370467 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.370477 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-248rk\" (UniqueName: \"kubernetes.io/projected/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-kube-api-access-248rk\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.370487 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.370497 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.370507 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.376585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-config-data" (OuterVolumeSpecName: "config-data") pod "3106ec2d-c61c-4632-97d7-c1fd2056a1ac" (UID: "3106ec2d-c61c-4632-97d7-c1fd2056a1ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.472213 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106ec2d-c61c-4632-97d7-c1fd2056a1ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.676543 4813 generic.go:334] "Generic (PLEG): container finished" podID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerID="ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754" exitCode=0 Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.678308 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerDied","Data":"ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754"} Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.678419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3106ec2d-c61c-4632-97d7-c1fd2056a1ac","Type":"ContainerDied","Data":"db634ad238334e7de4da047e7708fe597e33b50fa7f1ad1e1ac1be016c07a9b8"} Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.678499 4813 scope.go:117] "RemoveContainer" containerID="67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.678731 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.722588 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.736518 4813 scope.go:117] "RemoveContainer" containerID="503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.743702 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.753518 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:43 crc kubenswrapper[4813]: E1202 10:36:43.753984 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="ceilometer-notification-agent" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.754010 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="ceilometer-notification-agent" Dec 02 10:36:43 crc kubenswrapper[4813]: E1202 10:36:43.754034 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="ceilometer-central-agent" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.754041 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="ceilometer-central-agent" Dec 02 10:36:43 crc kubenswrapper[4813]: E1202 10:36:43.754052 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="proxy-httpd" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.754059 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="proxy-httpd" Dec 02 10:36:43 crc kubenswrapper[4813]: E1202 10:36:43.754091 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="sg-core" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.754097 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="sg-core" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.754270 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="proxy-httpd" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.754283 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="sg-core" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.754300 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="ceilometer-central-agent" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.754313 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" containerName="ceilometer-notification-agent" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.755760 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.758008 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.764562 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.765056 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.792288 4813 scope.go:117] "RemoveContainer" containerID="838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.824446 4813 scope.go:117] "RemoveContainer" containerID="ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.855692 4813 scope.go:117] "RemoveContainer" containerID="67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623" Dec 02 10:36:43 crc kubenswrapper[4813]: E1202 10:36:43.856162 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623\": container with ID starting with 67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623 not found: ID does not exist" containerID="67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.856208 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623"} err="failed to get container status \"67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623\": rpc error: code = NotFound desc = could not find container \"67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623\": container with ID starting with 67de340f308854138318a666e49dc901aa30742bd1ade12efdb91fd1c1a24623 not found: ID does not exist" Dec 02 10:36:43 crc 
kubenswrapper[4813]: I1202 10:36:43.856237 4813 scope.go:117] "RemoveContainer" containerID="503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec" Dec 02 10:36:43 crc kubenswrapper[4813]: E1202 10:36:43.857035 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec\": container with ID starting with 503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec not found: ID does not exist" containerID="503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.857062 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec"} err="failed to get container status \"503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec\": rpc error: code = NotFound desc = could not find container \"503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec\": container with ID starting with 503fce284e806532dd32fbca26bb03255740763392edb7fb86445bf3232831ec not found: ID does not exist" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.857095 4813 scope.go:117] "RemoveContainer" containerID="838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047" Dec 02 10:36:43 crc kubenswrapper[4813]: E1202 10:36:43.857743 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047\": container with ID starting with 838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047 not found: ID does not exist" containerID="838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.857768 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047"} err="failed to get container status \"838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047\": rpc error: code = NotFound desc = could not find container \"838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047\": container with ID starting with 838198996c16de5c50c376dd6180098dae12b4b61845751bccde1769ce721047 not found: ID does not exist" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.857783 4813 scope.go:117] "RemoveContainer" containerID="ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754" Dec 02 10:36:43 crc kubenswrapper[4813]: E1202 10:36:43.858239 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754\": container with ID starting with ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754 not found: ID does not exist" containerID="ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.858263 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754"} err="failed to get container status \"ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754\": rpc error: code = NotFound desc = could not find container 
\"ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754\": container with ID starting with ad36c9c3b2df4ac139ca77b67a7f831caf4dca44cbde5bbecfd81dac427a9754 not found: ID does not exist" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.878962 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-log-httpd\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.879047 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-scripts\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.879134 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggsg8\" (UniqueName: \"kubernetes.io/projected/671ef6a3-0800-4be6-96e9-69eb85dba9e0-kube-api-access-ggsg8\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.879197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.879320 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-run-httpd\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.879387 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-config-data\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.879477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.981021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-run-httpd\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.981112 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-config-data\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 
10:36:43.981160 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.981213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-log-httpd\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.981232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-scripts\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.981251 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsg8\" (UniqueName: \"kubernetes.io/projected/671ef6a3-0800-4be6-96e9-69eb85dba9e0-kube-api-access-ggsg8\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.981281 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.981640 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-run-httpd\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.981780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-log-httpd\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.986924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.988187 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-config-data\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:43 crc kubenswrapper[4813]: I1202 10:36:43.998800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:44 crc kubenswrapper[4813]: I1202 10:36:44.001466 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ggsg8\" (UniqueName: \"kubernetes.io/projected/671ef6a3-0800-4be6-96e9-69eb85dba9e0-kube-api-access-ggsg8\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:44 crc kubenswrapper[4813]: I1202 10:36:44.005784 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-scripts\") pod \"ceilometer-0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " pod="openstack/ceilometer-0" Dec 02 10:36:44 crc kubenswrapper[4813]: I1202 10:36:44.077728 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3106ec2d-c61c-4632-97d7-c1fd2056a1ac" path="/var/lib/kubelet/pods/3106ec2d-c61c-4632-97d7-c1fd2056a1ac/volumes" Dec 02 10:36:44 crc kubenswrapper[4813]: I1202 10:36:44.096564 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:44 crc kubenswrapper[4813]: I1202 10:36:44.523104 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:44 crc kubenswrapper[4813]: I1202 10:36:44.561900 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:44 crc kubenswrapper[4813]: W1202 10:36:44.563173 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod671ef6a3_0800_4be6_96e9_69eb85dba9e0.slice/crio-16d6559e258fa28997c3fd38574b7b0d1b5005f681bd0c0b3ab658607ccd6ef2 WatchSource:0}: Error finding container 16d6559e258fa28997c3fd38574b7b0d1b5005f681bd0c0b3ab658607ccd6ef2: Status 404 returned error can't find the container with id 16d6559e258fa28997c3fd38574b7b0d1b5005f681bd0c0b3ab658607ccd6ef2 Dec 02 10:36:44 crc kubenswrapper[4813]: I1202 10:36:44.693053 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerStarted","Data":"16d6559e258fa28997c3fd38574b7b0d1b5005f681bd0c0b3ab658607ccd6ef2"} Dec 02 10:36:47 crc kubenswrapper[4813]: I1202 10:36:47.070949 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:36:47 crc kubenswrapper[4813]: E1202 10:36:47.071511 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:36:50 crc kubenswrapper[4813]: I1202 10:36:50.777941 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerStarted","Data":"2647b5c41bb8a83bd8fd0c374ebb62cf86c5b7ce3f0798b1e0663fc45859bf38"} Dec 02 10:36:50 crc kubenswrapper[4813]: I1202 10:36:50.780916 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" event={"ID":"3cc60b93-1ed9-43f7-9aa7-20d830692b2a","Type":"ContainerStarted","Data":"8d5be43539082a95ae82bd8f287905847e1d01aa3b955e6e23d75d32c207367c"} Dec 02 10:36:50 crc kubenswrapper[4813]: I1202 10:36:50.798269 4813 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" podStartSLOduration=1.7339294490000001 podStartE2EDuration="9.798245821s" podCreationTimestamp="2025-12-02 10:36:41 +0000 UTC" firstStartedPulling="2025-12-02 10:36:42.095578801 +0000 UTC m=+1726.290753113" lastFinishedPulling="2025-12-02 10:36:50.159895183 +0000 UTC m=+1734.355069485" observedRunningTime="2025-12-02 10:36:50.795212584 +0000 UTC m=+1734.990386906" watchObservedRunningTime="2025-12-02 10:36:50.798245821 +0000 UTC m=+1734.993420123" Dec 02 10:36:51 crc kubenswrapper[4813]: I1202 10:36:51.793466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerStarted","Data":"9c31468b7828c88e278a4dbbd40b3d076809cf3b7d16a864e41208ae05db59a8"} Dec 02 10:36:54 crc kubenswrapper[4813]: I1202 10:36:54.836645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerStarted","Data":"90ca16acef5bc4b0d2ebb1d933c57c623c814a84fbc9133904f1fcf65e761346"} Dec 02 10:36:56 crc kubenswrapper[4813]: I1202 10:36:56.856109 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerStarted","Data":"eb725420c3957a4074d8d8731a3e5378a9b8ffe0cce0491346d0cf2410d4aeb3"} Dec 02 10:36:56 crc kubenswrapper[4813]: I1202 10:36:56.856653 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:36:56 crc kubenswrapper[4813]: I1202 10:36:56.856356 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="ceilometer-notification-agent" containerID="cri-o://9c31468b7828c88e278a4dbbd40b3d076809cf3b7d16a864e41208ae05db59a8" gracePeriod=30 Dec 02 10:36:56 crc kubenswrapper[4813]: I1202 10:36:56.856228 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="ceilometer-central-agent" containerID="cri-o://2647b5c41bb8a83bd8fd0c374ebb62cf86c5b7ce3f0798b1e0663fc45859bf38" gracePeriod=30 Dec 02 10:36:56 crc kubenswrapper[4813]: I1202 10:36:56.856391 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="proxy-httpd" containerID="cri-o://eb725420c3957a4074d8d8731a3e5378a9b8ffe0cce0491346d0cf2410d4aeb3" gracePeriod=30 Dec 02 10:36:56 crc kubenswrapper[4813]: I1202 10:36:56.856408 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="sg-core" containerID="cri-o://90ca16acef5bc4b0d2ebb1d933c57c623c814a84fbc9133904f1fcf65e761346" gracePeriod=30 Dec 02 10:36:56 crc kubenswrapper[4813]: I1202 10:36:56.883775 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3942266229999998 podStartE2EDuration="13.883758618s" podCreationTimestamp="2025-12-02 10:36:43 +0000 UTC" firstStartedPulling="2025-12-02 10:36:44.566127883 +0000 UTC m=+1728.761302185" lastFinishedPulling="2025-12-02 10:36:56.055659878 +0000 UTC m=+1740.250834180" observedRunningTime="2025-12-02 10:36:56.878712844 +0000 UTC m=+1741.073887146" 
watchObservedRunningTime="2025-12-02 10:36:56.883758618 +0000 UTC m=+1741.078932920" Dec 02 10:36:57 crc kubenswrapper[4813]: I1202 10:36:57.867931 4813 generic.go:334] "Generic (PLEG): container finished" podID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerID="eb725420c3957a4074d8d8731a3e5378a9b8ffe0cce0491346d0cf2410d4aeb3" exitCode=0 Dec 02 10:36:57 crc kubenswrapper[4813]: I1202 10:36:57.868313 4813 generic.go:334] "Generic (PLEG): container finished" podID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerID="90ca16acef5bc4b0d2ebb1d933c57c623c814a84fbc9133904f1fcf65e761346" exitCode=2 Dec 02 10:36:57 crc kubenswrapper[4813]: I1202 10:36:57.868328 4813 generic.go:334] "Generic (PLEG): container finished" podID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerID="9c31468b7828c88e278a4dbbd40b3d076809cf3b7d16a864e41208ae05db59a8" exitCode=0 Dec 02 10:36:57 crc kubenswrapper[4813]: I1202 10:36:57.868148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerDied","Data":"eb725420c3957a4074d8d8731a3e5378a9b8ffe0cce0491346d0cf2410d4aeb3"} Dec 02 10:36:57 crc kubenswrapper[4813]: I1202 10:36:57.868402 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerDied","Data":"90ca16acef5bc4b0d2ebb1d933c57c623c814a84fbc9133904f1fcf65e761346"} Dec 02 10:36:57 crc kubenswrapper[4813]: I1202 10:36:57.868339 4813 generic.go:334] "Generic (PLEG): container finished" podID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerID="2647b5c41bb8a83bd8fd0c374ebb62cf86c5b7ce3f0798b1e0663fc45859bf38" exitCode=0 Dec 02 10:36:57 crc kubenswrapper[4813]: I1202 10:36:57.868425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerDied","Data":"9c31468b7828c88e278a4dbbd40b3d076809cf3b7d16a864e41208ae05db59a8"} Dec 02 10:36:57 crc kubenswrapper[4813]: I1202 10:36:57.868442 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerDied","Data":"2647b5c41bb8a83bd8fd0c374ebb62cf86c5b7ce3f0798b1e0663fc45859bf38"} Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.105061 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.239292 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-sg-core-conf-yaml\") pod \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.239360 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-combined-ca-bundle\") pod \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.239408 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggsg8\" (UniqueName: \"kubernetes.io/projected/671ef6a3-0800-4be6-96e9-69eb85dba9e0-kube-api-access-ggsg8\") pod \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.239494 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-scripts\") pod \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.239578 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-log-httpd\") pod \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.239643 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-config-data\") pod \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.239669 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-run-httpd\") pod \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\" (UID: \"671ef6a3-0800-4be6-96e9-69eb85dba9e0\") " Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.246324 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "671ef6a3-0800-4be6-96e9-69eb85dba9e0" (UID: "671ef6a3-0800-4be6-96e9-69eb85dba9e0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.249457 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671ef6a3-0800-4be6-96e9-69eb85dba9e0-kube-api-access-ggsg8" (OuterVolumeSpecName: "kube-api-access-ggsg8") pod "671ef6a3-0800-4be6-96e9-69eb85dba9e0" (UID: "671ef6a3-0800-4be6-96e9-69eb85dba9e0"). InnerVolumeSpecName "kube-api-access-ggsg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.253352 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "671ef6a3-0800-4be6-96e9-69eb85dba9e0" (UID: "671ef6a3-0800-4be6-96e9-69eb85dba9e0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.264505 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-scripts" (OuterVolumeSpecName: "scripts") pod "671ef6a3-0800-4be6-96e9-69eb85dba9e0" (UID: "671ef6a3-0800-4be6-96e9-69eb85dba9e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.307117 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "671ef6a3-0800-4be6-96e9-69eb85dba9e0" (UID: "671ef6a3-0800-4be6-96e9-69eb85dba9e0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.331497 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "671ef6a3-0800-4be6-96e9-69eb85dba9e0" (UID: "671ef6a3-0800-4be6-96e9-69eb85dba9e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.342131 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.342168 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.342179 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.342192 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggsg8\" (UniqueName: \"kubernetes.io/projected/671ef6a3-0800-4be6-96e9-69eb85dba9e0-kube-api-access-ggsg8\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.342201 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.342208 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/671ef6a3-0800-4be6-96e9-69eb85dba9e0-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.349774 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-config-data" (OuterVolumeSpecName: "config-data") pod "671ef6a3-0800-4be6-96e9-69eb85dba9e0" (UID: "671ef6a3-0800-4be6-96e9-69eb85dba9e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.443625 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671ef6a3-0800-4be6-96e9-69eb85dba9e0-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.878488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"671ef6a3-0800-4be6-96e9-69eb85dba9e0","Type":"ContainerDied","Data":"16d6559e258fa28997c3fd38574b7b0d1b5005f681bd0c0b3ab658607ccd6ef2"} Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.878547 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.878791 4813 scope.go:117] "RemoveContainer" containerID="eb725420c3957a4074d8d8731a3e5378a9b8ffe0cce0491346d0cf2410d4aeb3" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.905239 4813 scope.go:117] "RemoveContainer" containerID="90ca16acef5bc4b0d2ebb1d933c57c623c814a84fbc9133904f1fcf65e761346" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.916180 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.928971 4813 scope.go:117] "RemoveContainer" containerID="9c31468b7828c88e278a4dbbd40b3d076809cf3b7d16a864e41208ae05db59a8" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.930190 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.939472 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:58 crc kubenswrapper[4813]: E1202 10:36:58.939994 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="ceilometer-central-agent" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.940192 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="ceilometer-central-agent" Dec 02 10:36:58 crc kubenswrapper[4813]: E1202 10:36:58.940320 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="sg-core" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.940404 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="sg-core" Dec 02 10:36:58 crc kubenswrapper[4813]: E1202 10:36:58.940462 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="proxy-httpd" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.940510 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="proxy-httpd" Dec 02 10:36:58 crc kubenswrapper[4813]: E1202 10:36:58.940581 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="ceilometer-notification-agent" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.940630 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="ceilometer-notification-agent" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.940854 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="sg-core" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.940925 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="ceilometer-notification-agent" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.940980 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="ceilometer-central-agent" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.941039 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" containerName="proxy-httpd" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.942635 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.947105 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.948738 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.949550 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:36:58 crc kubenswrapper[4813]: I1202 10:36:58.969376 4813 scope.go:117] "RemoveContainer" containerID="2647b5c41bb8a83bd8fd0c374ebb62cf86c5b7ce3f0798b1e0663fc45859bf38" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.053331 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6trcr\" (UniqueName: \"kubernetes.io/projected/3493a790-ccfa-47e0-8182-437b5581f397-kube-api-access-6trcr\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.053403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-log-httpd\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.053450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-scripts\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.053477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.053549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-run-httpd\") pod \"ceilometer-0\" (UID: 
\"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.053576 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.053600 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-config-data\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.067589 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:36:59 crc kubenswrapper[4813]: E1202 10:36:59.068014 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.155402 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6trcr\" (UniqueName: \"kubernetes.io/projected/3493a790-ccfa-47e0-8182-437b5581f397-kube-api-access-6trcr\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.155478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-log-httpd\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.155536 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-scripts\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.155555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.155598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-run-httpd\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.155618 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.155634 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-config-data\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.156106 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-run-httpd\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.156271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-log-httpd\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.159391 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.159591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.160300 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-scripts\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.160585 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-config-data\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.172923 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6trcr\" (UniqueName: \"kubernetes.io/projected/3493a790-ccfa-47e0-8182-437b5581f397-kube-api-access-6trcr\") pod \"ceilometer-0\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.284886 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.728210 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:36:59 crc kubenswrapper[4813]: I1202 10:36:59.894665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerStarted","Data":"5db2e05eac308b46ccb507811fea5042b2c2d2ffbeb09b337de85fcb64306063"} Dec 02 10:37:00 crc kubenswrapper[4813]: I1202 10:37:00.078825 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="671ef6a3-0800-4be6-96e9-69eb85dba9e0" path="/var/lib/kubelet/pods/671ef6a3-0800-4be6-96e9-69eb85dba9e0/volumes" Dec 02 10:37:00 crc kubenswrapper[4813]: I1202 10:37:00.909624 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerStarted","Data":"3fad0cac112aee8d41a6bfd78fa8f4dbfbcf4c3b10c39def39753c7097beb40b"} Dec 02 10:37:01 crc kubenswrapper[4813]: I1202 10:37:01.921791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerStarted","Data":"352245dda1befdbff2327d23561882a11168682f6ff17a8cdb17ccd3fd57a5f5"} Dec 02 10:37:02 crc kubenswrapper[4813]: I1202 10:37:02.933219 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerStarted","Data":"99836f517af35f23412f53998f161b3665c656ea7393fe39e78980cfaf5ee1a9"} Dec 02 10:37:02 crc kubenswrapper[4813]: I1202 10:37:02.935582 4813 generic.go:334] "Generic (PLEG): container finished" podID="3cc60b93-1ed9-43f7-9aa7-20d830692b2a" containerID="8d5be43539082a95ae82bd8f287905847e1d01aa3b955e6e23d75d32c207367c" exitCode=0 Dec 02 10:37:02 crc kubenswrapper[4813]: I1202 10:37:02.935626 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" event={"ID":"3cc60b93-1ed9-43f7-9aa7-20d830692b2a","Type":"ContainerDied","Data":"8d5be43539082a95ae82bd8f287905847e1d01aa3b955e6e23d75d32c207367c"} Dec 02 10:37:03 crc kubenswrapper[4813]: I1202 10:37:03.946546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerStarted","Data":"b9c63629fa4e90ecd6d0093f2eac2a674c37660ac0152088cdba2bacb4cabe99"} Dec 02 10:37:03 crc kubenswrapper[4813]: I1202 10:37:03.946895 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:37:03 crc kubenswrapper[4813]: I1202 10:37:03.975265 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.435739704 podStartE2EDuration="5.975243262s" podCreationTimestamp="2025-12-02 10:36:58 +0000 UTC" firstStartedPulling="2025-12-02 10:36:59.737376181 +0000 UTC m=+1743.932550483" lastFinishedPulling="2025-12-02 10:37:03.276879729 +0000 UTC m=+1747.472054041" observedRunningTime="2025-12-02 10:37:03.965127133 +0000 UTC m=+1748.160301445" watchObservedRunningTime="2025-12-02 10:37:03.975243262 +0000 UTC m=+1748.170417564" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.284528 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.346495 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-scripts\") pod \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.346813 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-combined-ca-bundle\") pod \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.347022 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-config-data\") pod \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.347087 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djpf7\" (UniqueName: \"kubernetes.io/projected/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-kube-api-access-djpf7\") pod \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\" (UID: \"3cc60b93-1ed9-43f7-9aa7-20d830692b2a\") " Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.352252 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-kube-api-access-djpf7" (OuterVolumeSpecName: "kube-api-access-djpf7") pod "3cc60b93-1ed9-43f7-9aa7-20d830692b2a" (UID: "3cc60b93-1ed9-43f7-9aa7-20d830692b2a"). InnerVolumeSpecName "kube-api-access-djpf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.352682 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-scripts" (OuterVolumeSpecName: "scripts") pod "3cc60b93-1ed9-43f7-9aa7-20d830692b2a" (UID: "3cc60b93-1ed9-43f7-9aa7-20d830692b2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.371660 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cc60b93-1ed9-43f7-9aa7-20d830692b2a" (UID: "3cc60b93-1ed9-43f7-9aa7-20d830692b2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.372045 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-config-data" (OuterVolumeSpecName: "config-data") pod "3cc60b93-1ed9-43f7-9aa7-20d830692b2a" (UID: "3cc60b93-1ed9-43f7-9aa7-20d830692b2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.449582 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.449813 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.449928 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.450003 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djpf7\" (UniqueName: \"kubernetes.io/projected/3cc60b93-1ed9-43f7-9aa7-20d830692b2a-kube-api-access-djpf7\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.958341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" event={"ID":"3cc60b93-1ed9-43f7-9aa7-20d830692b2a","Type":"ContainerDied","Data":"63b231d60e47c9a0199ee3d464194187a6293a12350e8e32e673d76847a7b6e1"} Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.958392 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b231d60e47c9a0199ee3d464194187a6293a12350e8e32e673d76847a7b6e1" Dec 02 10:37:04 crc kubenswrapper[4813]: I1202 10:37:04.959614 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mhjsh" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.054218 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 10:37:05 crc kubenswrapper[4813]: E1202 10:37:05.054594 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc60b93-1ed9-43f7-9aa7-20d830692b2a" containerName="nova-cell0-conductor-db-sync" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.054616 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc60b93-1ed9-43f7-9aa7-20d830692b2a" containerName="nova-cell0-conductor-db-sync" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.054784 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc60b93-1ed9-43f7-9aa7-20d830692b2a" containerName="nova-cell0-conductor-db-sync" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.055387 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.057252 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8z5df" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.057455 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.066315 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.162302 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e2709a9-5caf-4939-b835-4ecbf7d9e865-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.162451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l8dw\" (UniqueName: \"kubernetes.io/projected/3e2709a9-5caf-4939-b835-4ecbf7d9e865-kube-api-access-2l8dw\") pod \"nova-cell0-conductor-0\" (UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.162492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2709a9-5caf-4939-b835-4ecbf7d9e865-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.264477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2709a9-5caf-4939-b835-4ecbf7d9e865-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.264547 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e2709a9-5caf-4939-b835-4ecbf7d9e865-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.264641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l8dw\" (UniqueName: \"kubernetes.io/projected/3e2709a9-5caf-4939-b835-4ecbf7d9e865-kube-api-access-2l8dw\") pod \"nova-cell0-conductor-0\" (UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.268713 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2709a9-5caf-4939-b835-4ecbf7d9e865-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.278492 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e2709a9-5caf-4939-b835-4ecbf7d9e865-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.283405 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l8dw\" (UniqueName: \"kubernetes.io/projected/3e2709a9-5caf-4939-b835-4ecbf7d9e865-kube-api-access-2l8dw\") pod \"nova-cell0-conductor-0\" (UID: \"3e2709a9-5caf-4939-b835-4ecbf7d9e865\") " pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.374111 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.814228 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 10:37:05 crc kubenswrapper[4813]: I1202 10:37:05.966659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e2709a9-5caf-4939-b835-4ecbf7d9e865","Type":"ContainerStarted","Data":"a1b6fe0ade9771ad2ddc3dcd1569c9027ace77e6eee4282c3de89279a1a690f4"} Dec 02 10:37:06 crc kubenswrapper[4813]: I1202 10:37:06.998044 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3e2709a9-5caf-4939-b835-4ecbf7d9e865","Type":"ContainerStarted","Data":"96085551a137df1c5ff8e6334b486ccf4ff6749ce84ccc65323ea80138823001"} Dec 02 10:37:06 crc kubenswrapper[4813]: I1202 10:37:06.998395 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:07 crc kubenswrapper[4813]: I1202 10:37:07.019537 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.019518405 podStartE2EDuration="2.019518405s" podCreationTimestamp="2025-12-02 10:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:07.015052527 +0000 UTC m=+1751.210226839" watchObservedRunningTime="2025-12-02 10:37:07.019518405 +0000 UTC m=+1751.214692707" Dec 02 10:37:12 crc kubenswrapper[4813]: I1202 10:37:12.067889 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:37:12 crc kubenswrapper[4813]: E1202 10:37:12.068674 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.422893 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.861198 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bc7vv"] Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.862583 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.864483 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.865004 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.884499 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bc7vv"] Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.963562 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-scripts\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.963905 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nztg\" (UniqueName: \"kubernetes.io/projected/02700b55-38cc-4aeb-b1a2-3e0820835639-kube-api-access-5nztg\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.964046 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:15 crc kubenswrapper[4813]: I1202 10:37:15.964191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-config-data\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.034190 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.035607 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.038305 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.049064 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.065743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.065808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nztg\" (UniqueName: \"kubernetes.io/projected/02700b55-38cc-4aeb-b1a2-3e0820835639-kube-api-access-5nztg\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.065850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-config-data\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.066000 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.066161 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-config-data\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.066226 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clw6d\" (UniqueName: \"kubernetes.io/projected/bbe1934f-df77-4479-ba6b-183b6fa08d29-kube-api-access-clw6d\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.066276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-scripts\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.076931 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-scripts\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.079175 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.084250 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-config-data\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.096645 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nztg\" (UniqueName: \"kubernetes.io/projected/02700b55-38cc-4aeb-b1a2-3e0820835639-kube-api-access-5nztg\") pod \"nova-cell0-cell-mapping-bc7vv\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") " pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.142835 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.145245 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.156872 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.163714 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.168049 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.168123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c4d29c-2fb2-430a-a07d-f8b060caf264-logs\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.168175 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-config-data\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.168240 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.168285 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jn4v\" (UniqueName: \"kubernetes.io/projected/d9c4d29c-2fb2-430a-a07d-f8b060caf264-kube-api-access-7jn4v\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" 
Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.168306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clw6d\" (UniqueName: \"kubernetes.io/projected/bbe1934f-df77-4479-ba6b-183b6fa08d29-kube-api-access-clw6d\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.168324 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-config-data\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.173020 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-config-data\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.174684 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.179831 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.181247 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.184806 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bc7vv" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.185335 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.200683 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.209025 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clw6d\" (UniqueName: \"kubernetes.io/projected/bbe1934f-df77-4479-ba6b-183b6fa08d29-kube-api-access-clw6d\") pod \"nova-scheduler-0\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.289020 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.289124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qds2l\" (UniqueName: \"kubernetes.io/projected/24cba461-894a-4ffb-84d8-33f354e4b9d7-kube-api-access-qds2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.289255 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.289334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jn4v\" (UniqueName: \"kubernetes.io/projected/d9c4d29c-2fb2-430a-a07d-f8b060caf264-kube-api-access-7jn4v\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.289391 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-config-data\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.289448 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.289651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c4d29c-2fb2-430a-a07d-f8b060caf264-logs\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.290329 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d9c4d29c-2fb2-430a-a07d-f8b060caf264-logs\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.310242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-config-data\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.324728 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.352518 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.358339 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jn4v\" (UniqueName: \"kubernetes.io/projected/d9c4d29c-2fb2-430a-a07d-f8b060caf264-kube-api-access-7jn4v\") pod \"nova-api-0\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.361993 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.375712 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.377548 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.380820 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.395369 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.395485 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.395511 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qds2l\" (UniqueName: \"kubernetes.io/projected/24cba461-894a-4ffb-84d8-33f354e4b9d7-kube-api-access-qds2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.402235 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.403662 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.416357 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.436633 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qds2l\" (UniqueName: \"kubernetes.io/projected/24cba461-894a-4ffb-84d8-33f354e4b9d7-kube-api-access-qds2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.439929 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-57n8s"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.441806 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.450372 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-57n8s"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508443 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-config\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508544 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508593 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v98q\" (UniqueName: \"kubernetes.io/projected/cd99ac45-880d-4a79-ae06-0b9da996f21e-kube-api-access-7v98q\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508634 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-config-data\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508719 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-dns-svc\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508762 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508797 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz4hf\" (UniqueName: \"kubernetes.io/projected/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-kube-api-access-bz4hf\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.508838 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd99ac45-880d-4a79-ae06-0b9da996f21e-logs\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611143 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd99ac45-880d-4a79-ae06-0b9da996f21e-logs\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-config\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611547 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611586 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v98q\" (UniqueName: \"kubernetes.io/projected/cd99ac45-880d-4a79-ae06-0b9da996f21e-kube-api-access-7v98q\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-config-data\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611696 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd99ac45-880d-4a79-ae06-0b9da996f21e-logs\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-dns-svc\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611776 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " 
pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.611811 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz4hf\" (UniqueName: \"kubernetes.io/projected/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-kube-api-access-bz4hf\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.616431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-config-data\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.617810 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-config\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.617950 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-dns-svc\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.618499 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.618840 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.618853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.631439 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz4hf\" (UniqueName: \"kubernetes.io/projected/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-kube-api-access-bz4hf\") pod \"dnsmasq-dns-566b5b7845-57n8s\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.635675 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v98q\" (UniqueName: \"kubernetes.io/projected/cd99ac45-880d-4a79-ae06-0b9da996f21e-kube-api-access-7v98q\") pod \"nova-metadata-0\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.676913 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.703098 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.778208 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.843385 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bc7vv"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.919204 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9g8dx"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.920791 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.927415 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.928361 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.935974 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9g8dx"] Dec 02 10:37:16 crc kubenswrapper[4813]: I1202 10:37:16.967542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.022360 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-scripts\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.022476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qbv\" (UniqueName: \"kubernetes.io/projected/5eb91674-4da6-449f-a496-894a0210963b-kube-api-access-p7qbv\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.022503 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-config-data\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.022563 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.046454 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:17 crc kubenswrapper[4813]: W1202 10:37:17.052335 4813 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c4d29c_2fb2_430a_a07d_f8b060caf264.slice/crio-3c991cf332b110f009796a6c59430e1cef7c6d539416a1ebfa195998fd55b4e3 WatchSource:0}: Error finding container 3c991cf332b110f009796a6c59430e1cef7c6d539416a1ebfa195998fd55b4e3: Status 404 returned error can't find the container with id 3c991cf332b110f009796a6c59430e1cef7c6d539416a1ebfa195998fd55b4e3 Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.109798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d9c4d29c-2fb2-430a-a07d-f8b060caf264","Type":"ContainerStarted","Data":"3c991cf332b110f009796a6c59430e1cef7c6d539416a1ebfa195998fd55b4e3"} Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.115614 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bbe1934f-df77-4479-ba6b-183b6fa08d29","Type":"ContainerStarted","Data":"3ae85e42ac5fd56e36f1b87edc0458b83c1c28df69a951e57b82f54c2699a140"} Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.118433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bc7vv" event={"ID":"02700b55-38cc-4aeb-b1a2-3e0820835639","Type":"ContainerStarted","Data":"e6fc530501f67b1890d617f34a32b40dfdcedcfc95e66b4748bcd7fdff1735a2"} Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.123725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.123797 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-scripts\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.123923 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7qbv\" (UniqueName: \"kubernetes.io/projected/5eb91674-4da6-449f-a496-894a0210963b-kube-api-access-p7qbv\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.123966 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-config-data\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.128615 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-scripts\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.129034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-config-data\") 
pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.131103 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.149803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7qbv\" (UniqueName: \"kubernetes.io/projected/5eb91674-4da6-449f-a496-894a0210963b-kube-api-access-p7qbv\") pod \"nova-cell1-conductor-db-sync-9g8dx\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") " pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.181831 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.254499 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9g8dx" Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.273881 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.362435 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-57n8s"] Dec 02 10:37:17 crc kubenswrapper[4813]: W1202 10:37:17.365421 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd1b4ae_380f_4b1a_9ead_df3407a2814d.slice/crio-a30a547382cd43c2da260d0833b025459e4c2cc68b40ee8fbb9b5f1cebc210e1 WatchSource:0}: Error finding container a30a547382cd43c2da260d0833b025459e4c2cc68b40ee8fbb9b5f1cebc210e1: Status 404 returned error can't find the container with id a30a547382cd43c2da260d0833b025459e4c2cc68b40ee8fbb9b5f1cebc210e1 Dec 02 10:37:17 crc kubenswrapper[4813]: W1202 10:37:17.692935 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eb91674_4da6_449f_a496_894a0210963b.slice/crio-515f9665862e2657d6c37906a2f0a710342aef80da3ea43a760d052205c8390e WatchSource:0}: Error finding container 515f9665862e2657d6c37906a2f0a710342aef80da3ea43a760d052205c8390e: Status 404 returned error can't find the container with id 515f9665862e2657d6c37906a2f0a710342aef80da3ea43a760d052205c8390e Dec 02 10:37:17 crc kubenswrapper[4813]: I1202 10:37:17.695621 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9g8dx"] Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.152567 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9g8dx" event={"ID":"5eb91674-4da6-449f-a496-894a0210963b","Type":"ContainerStarted","Data":"4935a0a006ad6c988b8fde8d27c4ac0537508353b4cd4dcf95a60a47b82beddb"} Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.153119 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9g8dx" event={"ID":"5eb91674-4da6-449f-a496-894a0210963b","Type":"ContainerStarted","Data":"515f9665862e2657d6c37906a2f0a710342aef80da3ea43a760d052205c8390e"} Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 
10:37:18.157875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24cba461-894a-4ffb-84d8-33f354e4b9d7","Type":"ContainerStarted","Data":"19d3e13b2ea9ccb01773411d865f09e4c4f723a880b6a071049fe41656e0d12a"} Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.163332 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd99ac45-880d-4a79-ae06-0b9da996f21e","Type":"ContainerStarted","Data":"c81837f0993a2af7a5065417733a385421594f125c244170e063002bca5ad5bb"} Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.169643 4813 generic.go:334] "Generic (PLEG): container finished" podID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" containerID="926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b" exitCode=0 Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.169742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" event={"ID":"7fd1b4ae-380f-4b1a-9ead-df3407a2814d","Type":"ContainerDied","Data":"926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b"} Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.169878 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" event={"ID":"7fd1b4ae-380f-4b1a-9ead-df3407a2814d","Type":"ContainerStarted","Data":"a30a547382cd43c2da260d0833b025459e4c2cc68b40ee8fbb9b5f1cebc210e1"} Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.174440 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bc7vv" event={"ID":"02700b55-38cc-4aeb-b1a2-3e0820835639","Type":"ContainerStarted","Data":"38dcc9fbc5e7c99a2648f507a2af606d847d7b30a6f97b4183abe57a0231dd5f"} Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.175790 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9g8dx" podStartSLOduration=2.175770131 podStartE2EDuration="2.175770131s" podCreationTimestamp="2025-12-02 10:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:18.169558463 +0000 UTC m=+1762.364732765" watchObservedRunningTime="2025-12-02 10:37:18.175770131 +0000 UTC m=+1762.370944433" Dec 02 10:37:18 crc kubenswrapper[4813]: I1202 10:37:18.227977 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bc7vv" podStartSLOduration=3.22750682 podStartE2EDuration="3.22750682s" podCreationTimestamp="2025-12-02 10:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:18.222943409 +0000 UTC m=+1762.418117741" watchObservedRunningTime="2025-12-02 10:37:18.22750682 +0000 UTC m=+1762.422681122" Dec 02 10:37:19 crc kubenswrapper[4813]: I1202 10:37:19.194487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" event={"ID":"7fd1b4ae-380f-4b1a-9ead-df3407a2814d","Type":"ContainerStarted","Data":"f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02"} Dec 02 10:37:19 crc kubenswrapper[4813]: I1202 10:37:19.213009 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" podStartSLOduration=3.212990981 podStartE2EDuration="3.212990981s" podCreationTimestamp="2025-12-02 10:37:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:19.211138378 +0000 UTC m=+1763.406312680" watchObservedRunningTime="2025-12-02 10:37:19.212990981 +0000 UTC m=+1763.408165283" Dec 02 10:37:19 crc kubenswrapper[4813]: I1202 10:37:19.952849 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:37:19 crc kubenswrapper[4813]: I1202 10:37:19.964164 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:20 crc kubenswrapper[4813]: I1202 10:37:20.201801 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.211767 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24cba461-894a-4ffb-84d8-33f354e4b9d7","Type":"ContainerStarted","Data":"cdc6ec14e0027af0f1e5fd08bdde3d9f9867690bb37cb120647529077ae9c3a4"} Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.212040 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="24cba461-894a-4ffb-84d8-33f354e4b9d7" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://cdc6ec14e0027af0f1e5fd08bdde3d9f9867690bb37cb120647529077ae9c3a4" gracePeriod=30 Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.214378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd99ac45-880d-4a79-ae06-0b9da996f21e","Type":"ContainerStarted","Data":"27d0462ab69a51663cc9a03a107b69b87bc0cc30dbe28a8fe161df16efdd9bb4"} Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.216920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bbe1934f-df77-4479-ba6b-183b6fa08d29","Type":"ContainerStarted","Data":"f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334"} Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.223646 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d9c4d29c-2fb2-430a-a07d-f8b060caf264","Type":"ContainerStarted","Data":"95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497"} Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.233790 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.684730305 podStartE2EDuration="5.233752746s" podCreationTimestamp="2025-12-02 10:37:16 +0000 UTC" firstStartedPulling="2025-12-02 10:37:17.18500046 +0000 UTC m=+1761.380174762" lastFinishedPulling="2025-12-02 10:37:20.734022901 +0000 UTC m=+1764.929197203" observedRunningTime="2025-12-02 10:37:21.230654397 +0000 UTC m=+1765.425828739" watchObservedRunningTime="2025-12-02 10:37:21.233752746 +0000 UTC m=+1765.428927058" Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.260124 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.5007736870000001 podStartE2EDuration="5.260101829s" podCreationTimestamp="2025-12-02 10:37:16 +0000 UTC" firstStartedPulling="2025-12-02 10:37:16.97545504 +0000 UTC m=+1761.170629342" lastFinishedPulling="2025-12-02 10:37:20.734783182 +0000 UTC m=+1764.929957484" observedRunningTime="2025-12-02 10:37:21.253668625 +0000 UTC m=+1765.448842937" watchObservedRunningTime="2025-12-02 10:37:21.260101829 
+0000 UTC m=+1765.455276131" Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.353198 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 10:37:21 crc kubenswrapper[4813]: I1202 10:37:21.678463 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:22 crc kubenswrapper[4813]: I1202 10:37:22.232215 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd99ac45-880d-4a79-ae06-0b9da996f21e","Type":"ContainerStarted","Data":"97fa8f73017c583f9c9c2f7aebda76da5696c5b83dc30e505025a1bf267ae4b0"} Dec 02 10:37:22 crc kubenswrapper[4813]: I1202 10:37:22.232389 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerName="nova-metadata-log" containerID="cri-o://27d0462ab69a51663cc9a03a107b69b87bc0cc30dbe28a8fe161df16efdd9bb4" gracePeriod=30 Dec 02 10:37:22 crc kubenswrapper[4813]: I1202 10:37:22.233740 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerName="nova-metadata-metadata" containerID="cri-o://97fa8f73017c583f9c9c2f7aebda76da5696c5b83dc30e505025a1bf267ae4b0" gracePeriod=30 Dec 02 10:37:22 crc kubenswrapper[4813]: I1202 10:37:22.236277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d9c4d29c-2fb2-430a-a07d-f8b060caf264","Type":"ContainerStarted","Data":"a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2"} Dec 02 10:37:22 crc kubenswrapper[4813]: I1202 10:37:22.257400 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.807270703 podStartE2EDuration="6.257361695s" podCreationTimestamp="2025-12-02 10:37:16 +0000 UTC" firstStartedPulling="2025-12-02 10:37:17.283960769 +0000 UTC m=+1761.479135071" lastFinishedPulling="2025-12-02 10:37:20.734051771 +0000 UTC m=+1764.929226063" observedRunningTime="2025-12-02 10:37:22.252505967 +0000 UTC m=+1766.447680269" watchObservedRunningTime="2025-12-02 10:37:22.257361695 +0000 UTC m=+1766.452535997" Dec 02 10:37:22 crc kubenswrapper[4813]: I1202 10:37:22.273034 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.590485676 podStartE2EDuration="6.273021093s" podCreationTimestamp="2025-12-02 10:37:16 +0000 UTC" firstStartedPulling="2025-12-02 10:37:17.059368819 +0000 UTC m=+1761.254543121" lastFinishedPulling="2025-12-02 10:37:20.741904236 +0000 UTC m=+1764.937078538" observedRunningTime="2025-12-02 10:37:22.268058981 +0000 UTC m=+1766.463233283" watchObservedRunningTime="2025-12-02 10:37:22.273021093 +0000 UTC m=+1766.468195395" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.247024 4813 generic.go:334] "Generic (PLEG): container finished" podID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerID="97fa8f73017c583f9c9c2f7aebda76da5696c5b83dc30e505025a1bf267ae4b0" exitCode=0 Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.247539 4813 generic.go:334] "Generic (PLEG): container finished" podID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerID="27d0462ab69a51663cc9a03a107b69b87bc0cc30dbe28a8fe161df16efdd9bb4" exitCode=143 Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.247144 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"cd99ac45-880d-4a79-ae06-0b9da996f21e","Type":"ContainerDied","Data":"97fa8f73017c583f9c9c2f7aebda76da5696c5b83dc30e505025a1bf267ae4b0"} Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.247596 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd99ac45-880d-4a79-ae06-0b9da996f21e","Type":"ContainerDied","Data":"27d0462ab69a51663cc9a03a107b69b87bc0cc30dbe28a8fe161df16efdd9bb4"} Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.247616 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd99ac45-880d-4a79-ae06-0b9da996f21e","Type":"ContainerDied","Data":"c81837f0993a2af7a5065417733a385421594f125c244170e063002bca5ad5bb"} Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.247630 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81837f0993a2af7a5065417733a385421594f125c244170e063002bca5ad5bb" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.281792 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.371522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v98q\" (UniqueName: \"kubernetes.io/projected/cd99ac45-880d-4a79-ae06-0b9da996f21e-kube-api-access-7v98q\") pod \"cd99ac45-880d-4a79-ae06-0b9da996f21e\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.371760 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-combined-ca-bundle\") pod \"cd99ac45-880d-4a79-ae06-0b9da996f21e\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.371842 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd99ac45-880d-4a79-ae06-0b9da996f21e-logs\") pod \"cd99ac45-880d-4a79-ae06-0b9da996f21e\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.372010 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-config-data\") pod \"cd99ac45-880d-4a79-ae06-0b9da996f21e\" (UID: \"cd99ac45-880d-4a79-ae06-0b9da996f21e\") " Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.372411 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd99ac45-880d-4a79-ae06-0b9da996f21e-logs" (OuterVolumeSpecName: "logs") pod "cd99ac45-880d-4a79-ae06-0b9da996f21e" (UID: "cd99ac45-880d-4a79-ae06-0b9da996f21e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.372843 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd99ac45-880d-4a79-ae06-0b9da996f21e-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.378880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd99ac45-880d-4a79-ae06-0b9da996f21e-kube-api-access-7v98q" (OuterVolumeSpecName: "kube-api-access-7v98q") pod "cd99ac45-880d-4a79-ae06-0b9da996f21e" (UID: "cd99ac45-880d-4a79-ae06-0b9da996f21e"). InnerVolumeSpecName "kube-api-access-7v98q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.403950 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-config-data" (OuterVolumeSpecName: "config-data") pod "cd99ac45-880d-4a79-ae06-0b9da996f21e" (UID: "cd99ac45-880d-4a79-ae06-0b9da996f21e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.404887 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd99ac45-880d-4a79-ae06-0b9da996f21e" (UID: "cd99ac45-880d-4a79-ae06-0b9da996f21e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.474451 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.474494 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v98q\" (UniqueName: \"kubernetes.io/projected/cd99ac45-880d-4a79-ae06-0b9da996f21e-kube-api-access-7v98q\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:23 crc kubenswrapper[4813]: I1202 10:37:23.474508 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd99ac45-880d-4a79-ae06-0b9da996f21e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.255408 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.286472 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.301951 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.315221 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:24 crc kubenswrapper[4813]: E1202 10:37:24.315886 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerName="nova-metadata-metadata"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.315914 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerName="nova-metadata-metadata"
Dec 02 10:37:24 crc kubenswrapper[4813]: E1202 10:37:24.315981 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerName="nova-metadata-log"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.315996 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerName="nova-metadata-log"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.316397 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerName="nova-metadata-log"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.316418 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" containerName="nova-metadata-metadata"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.318187 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.321659 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.321843 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.325520 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.404691 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz5f9\" (UniqueName: \"kubernetes.io/projected/ceb78558-044a-4416-9ab6-089ef939bbef-kube-api-access-pz5f9\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.404751 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb78558-044a-4416-9ab6-089ef939bbef-logs\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.405133 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.405265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-config-data\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.405439 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.507639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.507698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-config-data\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.507769 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.507806 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz5f9\" (UniqueName: \"kubernetes.io/projected/ceb78558-044a-4416-9ab6-089ef939bbef-kube-api-access-pz5f9\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.507827 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb78558-044a-4416-9ab6-089ef939bbef-logs\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.508276 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb78558-044a-4416-9ab6-089ef939bbef-logs\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.514107 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-config-data\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.515677 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.524842 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.525671 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz5f9\" (UniqueName: \"kubernetes.io/projected/ceb78558-044a-4416-9ab6-089ef939bbef-kube-api-access-pz5f9\") pod \"nova-metadata-0\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:24 crc kubenswrapper[4813]: I1202 10:37:24.646582 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:37:25 crc kubenswrapper[4813]: I1202 10:37:25.068985 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376"
Dec 02 10:37:25 crc kubenswrapper[4813]: E1202 10:37:25.070030 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 10:37:25 crc kubenswrapper[4813]: I1202 10:37:25.102394 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:25 crc kubenswrapper[4813]: I1202 10:37:25.268586 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceb78558-044a-4416-9ab6-089ef939bbef","Type":"ContainerStarted","Data":"897e584659eb5dcaf03d8591d3a3dc1860b737d4004a8f70f8c173a069ecbee4"}
Dec 02 10:37:25 crc kubenswrapper[4813]: I1202 10:37:25.271829 4813 generic.go:334] "Generic (PLEG): container finished" podID="02700b55-38cc-4aeb-b1a2-3e0820835639" containerID="38dcc9fbc5e7c99a2648f507a2af606d847d7b30a6f97b4183abe57a0231dd5f" exitCode=0
Dec 02 10:37:25 crc kubenswrapper[4813]: I1202 10:37:25.271922 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bc7vv" event={"ID":"02700b55-38cc-4aeb-b1a2-3e0820835639","Type":"ContainerDied","Data":"38dcc9fbc5e7c99a2648f507a2af606d847d7b30a6f97b4183abe57a0231dd5f"}
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.080920 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd99ac45-880d-4a79-ae06-0b9da996f21e" path="/var/lib/kubelet/pods/cd99ac45-880d-4a79-ae06-0b9da996f21e/volumes"
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.282731 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceb78558-044a-4416-9ab6-089ef939bbef","Type":"ContainerStarted","Data":"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651"}
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.282785 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceb78558-044a-4416-9ab6-089ef939bbef","Type":"ContainerStarted","Data":"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96"}
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.285468 4813 generic.go:334] "Generic (PLEG): container finished" podID="5eb91674-4da6-449f-a496-894a0210963b" containerID="4935a0a006ad6c988b8fde8d27c4ac0537508353b4cd4dcf95a60a47b82beddb" exitCode=0
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.285484 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9g8dx" event={"ID":"5eb91674-4da6-449f-a496-894a0210963b","Type":"ContainerDied","Data":"4935a0a006ad6c988b8fde8d27c4ac0537508353b4cd4dcf95a60a47b82beddb"}
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.304437 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.304415813 podStartE2EDuration="2.304415813s" podCreationTimestamp="2025-12-02 10:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:26.298468203 +0000 UTC m=+1770.493642505" watchObservedRunningTime="2025-12-02 10:37:26.304415813 +0000 UTC m=+1770.499590115"
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.353670 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.363579 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.363628 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.391905 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.648952 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bc7vv"
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.747039 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nztg\" (UniqueName: \"kubernetes.io/projected/02700b55-38cc-4aeb-b1a2-3e0820835639-kube-api-access-5nztg\") pod \"02700b55-38cc-4aeb-b1a2-3e0820835639\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") "
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.747163 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-config-data\") pod \"02700b55-38cc-4aeb-b1a2-3e0820835639\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") "
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.747198 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-scripts\") pod \"02700b55-38cc-4aeb-b1a2-3e0820835639\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") "
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.747334 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-combined-ca-bundle\") pod \"02700b55-38cc-4aeb-b1a2-3e0820835639\" (UID: \"02700b55-38cc-4aeb-b1a2-3e0820835639\") "
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.752102 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-scripts" (OuterVolumeSpecName: "scripts") pod "02700b55-38cc-4aeb-b1a2-3e0820835639" (UID: "02700b55-38cc-4aeb-b1a2-3e0820835639"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.754808 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02700b55-38cc-4aeb-b1a2-3e0820835639-kube-api-access-5nztg" (OuterVolumeSpecName: "kube-api-access-5nztg") pod "02700b55-38cc-4aeb-b1a2-3e0820835639" (UID: "02700b55-38cc-4aeb-b1a2-3e0820835639"). InnerVolumeSpecName "kube-api-access-5nztg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.779440 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-57n8s"
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.783994 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-config-data" (OuterVolumeSpecName: "config-data") pod "02700b55-38cc-4aeb-b1a2-3e0820835639" (UID: "02700b55-38cc-4aeb-b1a2-3e0820835639"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.792869 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02700b55-38cc-4aeb-b1a2-3e0820835639" (UID: "02700b55-38cc-4aeb-b1a2-3e0820835639"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.848099 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-khmf6"]
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.848442 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" podUID="df3c50a8-d38e-4243-9c68-3c8713072e3e" containerName="dnsmasq-dns" containerID="cri-o://70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a" gracePeriod=10
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.849350 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.849390 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nztg\" (UniqueName: \"kubernetes.io/projected/02700b55-38cc-4aeb-b1a2-3e0820835639-kube-api-access-5nztg\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.849405 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:26 crc kubenswrapper[4813]: I1202 10:37:26.849416 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02700b55-38cc-4aeb-b1a2-3e0820835639-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.253991 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.300761 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bc7vv" event={"ID":"02700b55-38cc-4aeb-b1a2-3e0820835639","Type":"ContainerDied","Data":"e6fc530501f67b1890d617f34a32b40dfdcedcfc95e66b4748bcd7fdff1735a2"}
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.300809 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6fc530501f67b1890d617f34a32b40dfdcedcfc95e66b4748bcd7fdff1735a2"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.300991 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bc7vv"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.314197 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.314114 4813 generic.go:334] "Generic (PLEG): container finished" podID="df3c50a8-d38e-4243-9c68-3c8713072e3e" containerID="70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a" exitCode=0
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.314548 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" event={"ID":"df3c50a8-d38e-4243-9c68-3c8713072e3e","Type":"ContainerDied","Data":"70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a"}
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.314618 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-khmf6" event={"ID":"df3c50a8-d38e-4243-9c68-3c8713072e3e","Type":"ContainerDied","Data":"629c695ee63a8e72b13dbd872a9d0a7e10ddaa1c65689ae4a70a39543e4491ba"}
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.314643 4813 scope.go:117] "RemoveContainer" containerID="70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.343153 4813 scope.go:117] "RemoveContainer" containerID="39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.354309 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.357499 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-sb\") pod \"df3c50a8-d38e-4243-9c68-3c8713072e3e\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.357596 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgkdk\" (UniqueName: \"kubernetes.io/projected/df3c50a8-d38e-4243-9c68-3c8713072e3e-kube-api-access-qgkdk\") pod \"df3c50a8-d38e-4243-9c68-3c8713072e3e\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.357624 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-dns-svc\") pod \"df3c50a8-d38e-4243-9c68-3c8713072e3e\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.357660 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-config\") pod \"df3c50a8-d38e-4243-9c68-3c8713072e3e\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.357681 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-nb\") pod \"df3c50a8-d38e-4243-9c68-3c8713072e3e\" (UID: \"df3c50a8-d38e-4243-9c68-3c8713072e3e\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.371556 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3c50a8-d38e-4243-9c68-3c8713072e3e-kube-api-access-qgkdk" (OuterVolumeSpecName: "kube-api-access-qgkdk") pod "df3c50a8-d38e-4243-9c68-3c8713072e3e" (UID: "df3c50a8-d38e-4243-9c68-3c8713072e3e"). InnerVolumeSpecName "kube-api-access-qgkdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.390561 4813 scope.go:117] "RemoveContainer" containerID="70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a"
Dec 02 10:37:27 crc kubenswrapper[4813]: E1202 10:37:27.391596 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a\": container with ID starting with 70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a not found: ID does not exist" containerID="70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.391642 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a"} err="failed to get container status \"70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a\": rpc error: code = NotFound desc = could not find container \"70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a\": container with ID starting with 70480d8a4f00854da73310315901216a6ae532b05839341d85f28e499f96a96a not found: ID does not exist"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.391669 4813 scope.go:117] "RemoveContainer" containerID="39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2"
Dec 02 10:37:27 crc kubenswrapper[4813]: E1202 10:37:27.395949 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2\": container with ID starting with 39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2 not found: ID does not exist" containerID="39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.396090 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2"} err="failed to get container status \"39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2\": rpc error: code = NotFound desc = could not find container \"39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2\": container with ID starting with 39fd1de6a63da3cfb235f110fe5a84d52d5a045ba64b4b2844a1b6985d91ffb2 not found: ID does not exist"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.404299 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.419834 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df3c50a8-d38e-4243-9c68-3c8713072e3e" (UID: "df3c50a8-d38e-4243-9c68-3c8713072e3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.435209 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-config" (OuterVolumeSpecName: "config") pod "df3c50a8-d38e-4243-9c68-3c8713072e3e" (UID: "df3c50a8-d38e-4243-9c68-3c8713072e3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.435574 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df3c50a8-d38e-4243-9c68-3c8713072e3e" (UID: "df3c50a8-d38e-4243-9c68-3c8713072e3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.442311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df3c50a8-d38e-4243-9c68-3c8713072e3e" (UID: "df3c50a8-d38e-4243-9c68-3c8713072e3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.445243 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.458921 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.459160 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-log" containerID="cri-o://95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497" gracePeriod=30
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.459404 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgkdk\" (UniqueName: \"kubernetes.io/projected/df3c50a8-d38e-4243-9c68-3c8713072e3e-kube-api-access-qgkdk\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.459451 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.459463 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-config\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.459476 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.459486 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df3c50a8-d38e-4243-9c68-3c8713072e3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.459576 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-api" containerID="cri-o://a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2" gracePeriod=30
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.494389 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.708697 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9g8dx"
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.721848 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-khmf6"]
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.742963 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-khmf6"]
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.764988 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-combined-ca-bundle\") pod \"5eb91674-4da6-449f-a496-894a0210963b\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.765244 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-config-data\") pod \"5eb91674-4da6-449f-a496-894a0210963b\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.765439 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7qbv\" (UniqueName: \"kubernetes.io/projected/5eb91674-4da6-449f-a496-894a0210963b-kube-api-access-p7qbv\") pod \"5eb91674-4da6-449f-a496-894a0210963b\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.765608 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-scripts\") pod \"5eb91674-4da6-449f-a496-894a0210963b\" (UID: \"5eb91674-4da6-449f-a496-894a0210963b\") "
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.769554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb91674-4da6-449f-a496-894a0210963b-kube-api-access-p7qbv" (OuterVolumeSpecName: "kube-api-access-p7qbv") pod "5eb91674-4da6-449f-a496-894a0210963b" (UID: "5eb91674-4da6-449f-a496-894a0210963b"). InnerVolumeSpecName "kube-api-access-p7qbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.770919 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-scripts" (OuterVolumeSpecName: "scripts") pod "5eb91674-4da6-449f-a496-894a0210963b" (UID: "5eb91674-4da6-449f-a496-894a0210963b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.791883 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-config-data" (OuterVolumeSpecName: "config-data") pod "5eb91674-4da6-449f-a496-894a0210963b" (UID: "5eb91674-4da6-449f-a496-894a0210963b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.802661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5eb91674-4da6-449f-a496-894a0210963b" (UID: "5eb91674-4da6-449f-a496-894a0210963b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.830398 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.869151 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.869185 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.869194 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7qbv\" (UniqueName: \"kubernetes.io/projected/5eb91674-4da6-449f-a496-894a0210963b-kube-api-access-p7qbv\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:27 crc kubenswrapper[4813]: I1202 10:37:27.869204 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb91674-4da6-449f-a496-894a0210963b-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.079327 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3c50a8-d38e-4243-9c68-3c8713072e3e" path="/var/lib/kubelet/pods/df3c50a8-d38e-4243-9c68-3c8713072e3e/volumes"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.324257 4813 generic.go:334] "Generic (PLEG): container finished" podID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerID="95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497" exitCode=143
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.324314 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d9c4d29c-2fb2-430a-a07d-f8b060caf264","Type":"ContainerDied","Data":"95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497"}
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.327199 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9g8dx" event={"ID":"5eb91674-4da6-449f-a496-894a0210963b","Type":"ContainerDied","Data":"515f9665862e2657d6c37906a2f0a710342aef80da3ea43a760d052205c8390e"}
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.327233 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="515f9665862e2657d6c37906a2f0a710342aef80da3ea43a760d052205c8390e"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.327309 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9g8dx"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.329699 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" containerName="nova-metadata-log" containerID="cri-o://eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96" gracePeriod=30
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.330957 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" containerName="nova-metadata-metadata" containerID="cri-o://620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651" gracePeriod=30
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.398862 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 02 10:37:28 crc kubenswrapper[4813]: E1202 10:37:28.399347 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02700b55-38cc-4aeb-b1a2-3e0820835639" containerName="nova-manage"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.399372 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="02700b55-38cc-4aeb-b1a2-3e0820835639" containerName="nova-manage"
Dec 02 10:37:28 crc kubenswrapper[4813]: E1202 10:37:28.399391 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3c50a8-d38e-4243-9c68-3c8713072e3e" containerName="dnsmasq-dns"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.399398 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3c50a8-d38e-4243-9c68-3c8713072e3e" containerName="dnsmasq-dns"
Dec 02 10:37:28 crc kubenswrapper[4813]: E1202 10:37:28.399414 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3c50a8-d38e-4243-9c68-3c8713072e3e" containerName="init"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.399421 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3c50a8-d38e-4243-9c68-3c8713072e3e" containerName="init"
Dec 02 10:37:28 crc kubenswrapper[4813]: E1202 10:37:28.399441 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eb91674-4da6-449f-a496-894a0210963b" containerName="nova-cell1-conductor-db-sync"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.399447 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eb91674-4da6-449f-a496-894a0210963b" containerName="nova-cell1-conductor-db-sync"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.399625 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eb91674-4da6-449f-a496-894a0210963b" containerName="nova-cell1-conductor-db-sync"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.399642 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="02700b55-38cc-4aeb-b1a2-3e0820835639" containerName="nova-manage"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.399653 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3c50a8-d38e-4243-9c68-3c8713072e3e" containerName="dnsmasq-dns"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.400547 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.403126 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.412249 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.480701 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfcj\" (UniqueName: \"kubernetes.io/projected/6ef24984-82f3-4b10-997c-3051d8a59c5f-kube-api-access-mcfcj\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.480757 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef24984-82f3-4b10-997c-3051d8a59c5f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.480942 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef24984-82f3-4b10-997c-3051d8a59c5f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.582874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcfcj\" (UniqueName: \"kubernetes.io/projected/6ef24984-82f3-4b10-997c-3051d8a59c5f-kube-api-access-mcfcj\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.582937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef24984-82f3-4b10-997c-3051d8a59c5f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.583948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef24984-82f3-4b10-997c-3051d8a59c5f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.587321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ef24984-82f3-4b10-997c-3051d8a59c5f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.587602 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ef24984-82f3-4b10-997c-3051d8a59c5f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.603257 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcfcj\" (UniqueName: \"kubernetes.io/projected/6ef24984-82f3-4b10-997c-3051d8a59c5f-kube-api-access-mcfcj\") pod \"nova-cell1-conductor-0\" (UID: \"6ef24984-82f3-4b10-997c-3051d8a59c5f\") " pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.733179 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.864216 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.989855 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb78558-044a-4416-9ab6-089ef939bbef-logs\") pod \"ceb78558-044a-4416-9ab6-089ef939bbef\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") "
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.990140 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-combined-ca-bundle\") pod \"ceb78558-044a-4416-9ab6-089ef939bbef\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") "
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.990240 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-nova-metadata-tls-certs\") pod \"ceb78558-044a-4416-9ab6-089ef939bbef\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") "
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.990260 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-config-data\") pod \"ceb78558-044a-4416-9ab6-089ef939bbef\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") "
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.990336 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz5f9\" (UniqueName: \"kubernetes.io/projected/ceb78558-044a-4416-9ab6-089ef939bbef-kube-api-access-pz5f9\") pod \"ceb78558-044a-4416-9ab6-089ef939bbef\" (UID: \"ceb78558-044a-4416-9ab6-089ef939bbef\") "
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.990334 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb78558-044a-4416-9ab6-089ef939bbef-logs" (OuterVolumeSpecName: "logs") pod "ceb78558-044a-4416-9ab6-089ef939bbef" (UID: "ceb78558-044a-4416-9ab6-089ef939bbef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.990690 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceb78558-044a-4416-9ab6-089ef939bbef-logs\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:28 crc kubenswrapper[4813]: I1202 10:37:28.995771 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb78558-044a-4416-9ab6-089ef939bbef-kube-api-access-pz5f9" (OuterVolumeSpecName: "kube-api-access-pz5f9") pod "ceb78558-044a-4416-9ab6-089ef939bbef" (UID: "ceb78558-044a-4416-9ab6-089ef939bbef"). InnerVolumeSpecName "kube-api-access-pz5f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.016017 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceb78558-044a-4416-9ab6-089ef939bbef" (UID: "ceb78558-044a-4416-9ab6-089ef939bbef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.016344 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-config-data" (OuterVolumeSpecName: "config-data") pod "ceb78558-044a-4416-9ab6-089ef939bbef" (UID: "ceb78558-044a-4416-9ab6-089ef939bbef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.033174 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ceb78558-044a-4416-9ab6-089ef939bbef" (UID: "ceb78558-044a-4416-9ab6-089ef939bbef"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.091867 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz5f9\" (UniqueName: \"kubernetes.io/projected/ceb78558-044a-4416-9ab6-089ef939bbef-kube-api-access-pz5f9\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.091898 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.091907 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.091917 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb78558-044a-4416-9ab6-089ef939bbef-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.206227 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 02 10:37:29 crc kubenswrapper[4813]: W1202 10:37:29.206300 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ef24984_82f3_4b10_997c_3051d8a59c5f.slice/crio-7c2d60ea9fba24d7a74b42c78064af41a3c728862d1dc8f105152f7b4227574a WatchSource:0}: Error finding container 7c2d60ea9fba24d7a74b42c78064af41a3c728862d1dc8f105152f7b4227574a: Status 404 returned error can't find the container with id 7c2d60ea9fba24d7a74b42c78064af41a3c728862d1dc8f105152f7b4227574a
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.292365 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.346898 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6ef24984-82f3-4b10-997c-3051d8a59c5f","Type":"ContainerStarted","Data":"7c2d60ea9fba24d7a74b42c78064af41a3c728862d1dc8f105152f7b4227574a"}
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.349770 4813 generic.go:334] "Generic (PLEG): container finished" podID="ceb78558-044a-4416-9ab6-089ef939bbef" containerID="620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651" exitCode=0
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.349794 4813 generic.go:334] "Generic (PLEG): container finished" podID="ceb78558-044a-4416-9ab6-089ef939bbef" containerID="eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96" exitCode=143
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.349855 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.349894 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceb78558-044a-4416-9ab6-089ef939bbef","Type":"ContainerDied","Data":"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651"}
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.349928 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceb78558-044a-4416-9ab6-089ef939bbef","Type":"ContainerDied","Data":"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96"}
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.349941 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bbe1934f-df77-4479-ba6b-183b6fa08d29" containerName="nova-scheduler-scheduler" containerID="cri-o://f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334" gracePeriod=30
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.349957 4813 scope.go:117] "RemoveContainer" containerID="620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.349946 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ceb78558-044a-4416-9ab6-089ef939bbef","Type":"ContainerDied","Data":"897e584659eb5dcaf03d8591d3a3dc1860b737d4004a8f70f8c173a069ecbee4"}
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.395994 4813 scope.go:117] "RemoveContainer" containerID="eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.411941 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.426729 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.436437 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:29 crc kubenswrapper[4813]: E1202 10:37:29.436905 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" containerName="nova-metadata-metadata"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.436928 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" containerName="nova-metadata-metadata"
Dec 02 10:37:29 crc kubenswrapper[4813]: E1202 10:37:29.436943 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" containerName="nova-metadata-log"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.436950 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" containerName="nova-metadata-log"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.437179 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" containerName="nova-metadata-metadata"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.437199 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" containerName="nova-metadata-log"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.438284 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.440424 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.440640 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.448783 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.455279 4813 scope.go:117] "RemoveContainer" containerID="620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651"
Dec 02 10:37:29 crc kubenswrapper[4813]: E1202 10:37:29.456051 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651\": container with ID starting with 620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651 not found: ID does not exist" containerID="620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.456106 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651"} err="failed to get container status \"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651\": rpc error: code = NotFound desc = could not find container \"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651\": container with ID starting with 620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651 not found: ID does not exist"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.456134 4813 scope.go:117] "RemoveContainer" containerID="eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96"
Dec 02 10:37:29 crc kubenswrapper[4813]: E1202 10:37:29.456595 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96\": container with ID starting with eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96 not found: ID does not exist" containerID="eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.456634 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96"} err="failed to get container status \"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96\": rpc error: code = NotFound desc = could not find container \"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96\": container with ID starting with eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96 not found: ID does not exist"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.456664 4813 scope.go:117] "RemoveContainer" containerID="620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.456888 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651"} err="failed to get container status \"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651\": rpc error: code = NotFound desc = could not find container \"620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651\": container with ID starting with 620b9e980e0419478823e257266bc614ed4d0d29748a30a3d298f72775c8e651 not found: ID does not exist"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.456916 4813 scope.go:117] "RemoveContainer" containerID="eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.457333 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96"} err="failed to get container status \"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96\": rpc error: code = NotFound desc = could not find container \"eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96\": container with ID starting with eb009bcbe39dacc2a8533bb8d0815ed15f65642917fd56b2f95161c79f366c96 not found: ID does not exist"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.498893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.498957 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblvr\" (UniqueName: \"kubernetes.io/projected/7a804fb5-4ce7-42e5-beb8-d301ece0f571-kube-api-access-fblvr\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.499012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.499282 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-config-data\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.499370 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a804fb5-4ce7-42e5-beb8-d301ece0f571-logs\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.600783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.601742 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fblvr\" (UniqueName: \"kubernetes.io/projected/7a804fb5-4ce7-42e5-beb8-d301ece0f571-kube-api-access-fblvr\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.601801 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.601876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-config-data\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.601894 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a804fb5-4ce7-42e5-beb8-d301ece0f571-logs\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.602232 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a804fb5-4ce7-42e5-beb8-d301ece0f571-logs\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.607258 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.616684 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.619685 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-config-data\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.625604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fblvr\" (UniqueName: \"kubernetes.io/projected/7a804fb5-4ce7-42e5-beb8-d301ece0f571-kube-api-access-fblvr\") pod \"nova-metadata-0\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " pod="openstack/nova-metadata-0"
Dec 02 10:37:29 crc kubenswrapper[4813]: I1202 10:37:29.769571 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 02 10:37:30 crc kubenswrapper[4813]: I1202 10:37:30.080938 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb78558-044a-4416-9ab6-089ef939bbef" path="/var/lib/kubelet/pods/ceb78558-044a-4416-9ab6-089ef939bbef/volumes"
Dec 02 10:37:30 crc kubenswrapper[4813]: I1202 10:37:30.231309 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 02 10:37:30 crc kubenswrapper[4813]: W1202 10:37:30.233993 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a804fb5_4ce7_42e5_beb8_d301ece0f571.slice/crio-2787a6f1cf3527c2da4117798799e6ffd684ba9b72cc0b2611d0cbc6f932c08b WatchSource:0}: Error finding container 2787a6f1cf3527c2da4117798799e6ffd684ba9b72cc0b2611d0cbc6f932c08b: Status 404 returned error can't find the container with id 2787a6f1cf3527c2da4117798799e6ffd684ba9b72cc0b2611d0cbc6f932c08b
Dec 02 10:37:30 crc kubenswrapper[4813]: I1202 10:37:30.364714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6ef24984-82f3-4b10-997c-3051d8a59c5f","Type":"ContainerStarted","Data":"1f8e895cabf5e92f0a670d633edc381d04ff6a4d8949373ef17dbe88a8eb126f"}
Dec 02 10:37:30 crc kubenswrapper[4813]: I1202 10:37:30.365869 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 02 10:37:30 crc kubenswrapper[4813]: I1202 10:37:30.369757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a804fb5-4ce7-42e5-beb8-d301ece0f571","Type":"ContainerStarted","Data":"2787a6f1cf3527c2da4117798799e6ffd684ba9b72cc0b2611d0cbc6f932c08b"}
Dec 02 10:37:30 crc kubenswrapper[4813]: I1202 10:37:30.391737 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.39170937 podStartE2EDuration="2.39170937s" podCreationTimestamp="2025-12-02 10:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:30.385351439 +0000 UTC m=+1774.580525741" watchObservedRunningTime="2025-12-02 10:37:30.39170937 +0000 UTC m=+1774.586883672"
Dec 02 10:37:31 crc kubenswrapper[4813]: E1202 10:37:31.355969 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 02 10:37:31 crc kubenswrapper[4813]: E1202 10:37:31.357677 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 02 10:37:31 crc kubenswrapper[4813]: E1202 10:37:31.359499 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 02 10:37:31 crc kubenswrapper[4813]: E1202 10:37:31.359534 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bbe1934f-df77-4479-ba6b-183b6fa08d29" containerName="nova-scheduler-scheduler"
Dec 02 10:37:31 crc kubenswrapper[4813]: I1202 10:37:31.380575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a804fb5-4ce7-42e5-beb8-d301ece0f571","Type":"ContainerStarted","Data":"db0d4ed82c2c133f26e05b0ac71fa1b871ea8bc01e11c40a6d389d1b330865b9"}
Dec 02 10:37:31 crc kubenswrapper[4813]: I1202 10:37:31.380613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a804fb5-4ce7-42e5-beb8-d301ece0f571","Type":"ContainerStarted","Data":"7a8ca8ad5c1cdcec9606c7ec949d73d3d668e55205fc6e6f620b6385f5907110"}
Dec 02 10:37:31 crc kubenswrapper[4813]: I1202 10:37:31.402561 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.402542196 podStartE2EDuration="2.402542196s" podCreationTimestamp="2025-12-02 10:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:31.39778146 +0000 UTC m=+1775.592955762" watchObservedRunningTime="2025-12-02 10:37:31.402542196 +0000 UTC m=+1775.597716498"
Dec 02 10:37:32 crc kubenswrapper[4813]: I1202 10:37:32.218277 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 02 10:37:32 crc kubenswrapper[4813]: I1202 10:37:32.218686 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="aeb0e843-c886-42eb-844c-a544d47c8c94" containerName="kube-state-metrics" containerID="cri-o://2c5b071e407282189c6ae92d2d2471cb9a73bea0bb6cd4caff1e9189fa9e6e02" gracePeriod=30
Dec 02 10:37:32 crc kubenswrapper[4813]: I1202 10:37:32.389416 4813 generic.go:334] "Generic (PLEG): container finished" podID="aeb0e843-c886-42eb-844c-a544d47c8c94" containerID="2c5b071e407282189c6ae92d2d2471cb9a73bea0bb6cd4caff1e9189fa9e6e02" exitCode=2
Dec 02 10:37:32 crc kubenswrapper[4813]: I1202 10:37:32.389498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aeb0e843-c886-42eb-844c-a544d47c8c94","Type":"ContainerDied","Data":"2c5b071e407282189c6ae92d2d2471cb9a73bea0bb6cd4caff1e9189fa9e6e02"}
Dec 02 10:37:32 crc kubenswrapper[4813]: I1202 10:37:32.647534 4813 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:37:32 crc kubenswrapper[4813]: I1202 10:37:32.764267 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6b9v\" (UniqueName: \"kubernetes.io/projected/aeb0e843-c886-42eb-844c-a544d47c8c94-kube-api-access-k6b9v\") pod \"aeb0e843-c886-42eb-844c-a544d47c8c94\" (UID: \"aeb0e843-c886-42eb-844c-a544d47c8c94\") " Dec 02 10:37:32 crc kubenswrapper[4813]: I1202 10:37:32.771993 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb0e843-c886-42eb-844c-a544d47c8c94-kube-api-access-k6b9v" (OuterVolumeSpecName: "kube-api-access-k6b9v") pod "aeb0e843-c886-42eb-844c-a544d47c8c94" (UID: "aeb0e843-c886-42eb-844c-a544d47c8c94"). InnerVolumeSpecName "kube-api-access-k6b9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:37:32 crc kubenswrapper[4813]: I1202 10:37:32.866645 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6b9v\" (UniqueName: \"kubernetes.io/projected/aeb0e843-c886-42eb-844c-a544d47c8c94-kube-api-access-k6b9v\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.031578 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.171265 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-config-data\") pod \"bbe1934f-df77-4479-ba6b-183b6fa08d29\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.171320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-combined-ca-bundle\") pod \"bbe1934f-df77-4479-ba6b-183b6fa08d29\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.171366 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clw6d\" (UniqueName: \"kubernetes.io/projected/bbe1934f-df77-4479-ba6b-183b6fa08d29-kube-api-access-clw6d\") pod \"bbe1934f-df77-4479-ba6b-183b6fa08d29\" (UID: \"bbe1934f-df77-4479-ba6b-183b6fa08d29\") " Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.176046 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe1934f-df77-4479-ba6b-183b6fa08d29-kube-api-access-clw6d" (OuterVolumeSpecName: "kube-api-access-clw6d") pod "bbe1934f-df77-4479-ba6b-183b6fa08d29" (UID: "bbe1934f-df77-4479-ba6b-183b6fa08d29"). InnerVolumeSpecName "kube-api-access-clw6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.202004 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbe1934f-df77-4479-ba6b-183b6fa08d29" (UID: "bbe1934f-df77-4479-ba6b-183b6fa08d29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.207385 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-config-data" (OuterVolumeSpecName: "config-data") pod "bbe1934f-df77-4479-ba6b-183b6fa08d29" (UID: "bbe1934f-df77-4479-ba6b-183b6fa08d29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.273426 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clw6d\" (UniqueName: \"kubernetes.io/projected/bbe1934f-df77-4479-ba6b-183b6fa08d29-kube-api-access-clw6d\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.273466 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.273479 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe1934f-df77-4479-ba6b-183b6fa08d29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.274477 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.275032 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="proxy-httpd" containerID="cri-o://b9c63629fa4e90ecd6d0093f2eac2a674c37660ac0152088cdba2bacb4cabe99" gracePeriod=30 Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.275188 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="sg-core" containerID="cri-o://99836f517af35f23412f53998f161b3665c656ea7393fe39e78980cfaf5ee1a9" gracePeriod=30 Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.275286 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="ceilometer-notification-agent" containerID="cri-o://352245dda1befdbff2327d23561882a11168682f6ff17a8cdb17ccd3fd57a5f5" gracePeriod=30 Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.275322 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="ceilometer-central-agent" containerID="cri-o://3fad0cac112aee8d41a6bfd78fa8f4dbfbcf4c3b10c39def39753c7097beb40b" gracePeriod=30 Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.281834 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.374227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-combined-ca-bundle\") pod \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.374488 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jn4v\" (UniqueName: \"kubernetes.io/projected/d9c4d29c-2fb2-430a-a07d-f8b060caf264-kube-api-access-7jn4v\") pod \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.374552 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-config-data\") pod \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.374617 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c4d29c-2fb2-430a-a07d-f8b060caf264-logs\") pod \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\" (UID: \"d9c4d29c-2fb2-430a-a07d-f8b060caf264\") " Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.375489 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c4d29c-2fb2-430a-a07d-f8b060caf264-logs" (OuterVolumeSpecName: "logs") pod "d9c4d29c-2fb2-430a-a07d-f8b060caf264" (UID: "d9c4d29c-2fb2-430a-a07d-f8b060caf264"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.378146 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c4d29c-2fb2-430a-a07d-f8b060caf264-kube-api-access-7jn4v" (OuterVolumeSpecName: "kube-api-access-7jn4v") pod "d9c4d29c-2fb2-430a-a07d-f8b060caf264" (UID: "d9c4d29c-2fb2-430a-a07d-f8b060caf264"). InnerVolumeSpecName "kube-api-access-7jn4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.405851 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aeb0e843-c886-42eb-844c-a544d47c8c94","Type":"ContainerDied","Data":"4dd220f75990b613d47856437e9d4266f4b4f593f873a4fa70cbbff199d5196f"} Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.405896 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.405904 4813 scope.go:117] "RemoveContainer" containerID="2c5b071e407282189c6ae92d2d2471cb9a73bea0bb6cd4caff1e9189fa9e6e02" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.406106 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-config-data" (OuterVolumeSpecName: "config-data") pod "d9c4d29c-2fb2-430a-a07d-f8b060caf264" (UID: "d9c4d29c-2fb2-430a-a07d-f8b060caf264"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.408149 4813 generic.go:334] "Generic (PLEG): container finished" podID="bbe1934f-df77-4479-ba6b-183b6fa08d29" containerID="f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334" exitCode=0 Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.408189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bbe1934f-df77-4479-ba6b-183b6fa08d29","Type":"ContainerDied","Data":"f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334"} Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.408205 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bbe1934f-df77-4479-ba6b-183b6fa08d29","Type":"ContainerDied","Data":"3ae85e42ac5fd56e36f1b87edc0458b83c1c28df69a951e57b82f54c2699a140"} Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.408251 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.412822 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9c4d29c-2fb2-430a-a07d-f8b060caf264" (UID: "d9c4d29c-2fb2-430a-a07d-f8b060caf264"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.414738 4813 generic.go:334] "Generic (PLEG): container finished" podID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerID="a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2" exitCode=0 Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.414781 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d9c4d29c-2fb2-430a-a07d-f8b060caf264","Type":"ContainerDied","Data":"a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2"} Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.414818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d9c4d29c-2fb2-430a-a07d-f8b060caf264","Type":"ContainerDied","Data":"3c991cf332b110f009796a6c59430e1cef7c6d539416a1ebfa195998fd55b4e3"} Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.414915 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.476974 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jn4v\" (UniqueName: \"kubernetes.io/projected/d9c4d29c-2fb2-430a-a07d-f8b060caf264-kube-api-access-7jn4v\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.477009 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.477020 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c4d29c-2fb2-430a-a07d-f8b060caf264-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.477029 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c4d29c-2fb2-430a-a07d-f8b060caf264-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.493382 4813 scope.go:117] "RemoveContainer" containerID="f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.494904 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.501834 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.517994 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.576924 4813 scope.go:117] "RemoveContainer" containerID="f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334" Dec 02 10:37:33 crc kubenswrapper[4813]: E1202 10:37:33.578796 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334\": container with ID starting with f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334 not found: ID does not exist" containerID="f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.578843 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334"} err="failed to get container status \"f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334\": rpc error: code = NotFound desc = could not find container \"f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334\": container with ID starting with f5eb38df9e105d111d83b6130bd4e99cbdd422c101a52fa91790ae72199ca334 not found: ID does not exist" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.578883 4813 scope.go:117] "RemoveContainer" containerID="a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.591447 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.608404 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: E1202 10:37:33.609195 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-api" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.609222 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-api" Dec 02 10:37:33 crc kubenswrapper[4813]: E1202 10:37:33.609236 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb0e843-c886-42eb-844c-a544d47c8c94" containerName="kube-state-metrics" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.609245 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb0e843-c886-42eb-844c-a544d47c8c94" containerName="kube-state-metrics" Dec 02 10:37:33 crc kubenswrapper[4813]: E1202 10:37:33.609286 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe1934f-df77-4479-ba6b-183b6fa08d29" containerName="nova-scheduler-scheduler" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.609295 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe1934f-df77-4479-ba6b-183b6fa08d29" containerName="nova-scheduler-scheduler" Dec 02 10:37:33 crc kubenswrapper[4813]: E1202 10:37:33.609304 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-log" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.609312 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-log" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.609533 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe1934f-df77-4479-ba6b-183b6fa08d29" containerName="nova-scheduler-scheduler" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.609553 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-log" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.609562 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" containerName="nova-api-api" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.609581 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb0e843-c886-42eb-844c-a544d47c8c94" containerName="kube-state-metrics" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.610310 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.617138 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.618180 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.625437 4813 scope.go:117] "RemoveContainer" containerID="95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.626813 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.638537 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.646621 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.648278 4813 scope.go:117] "RemoveContainer" containerID="a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.648953 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: E1202 10:37:33.649283 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2\": container with ID starting with a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2 not found: ID does not exist" containerID="a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.649335 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2"} err="failed to get container status \"a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2\": rpc error: code = NotFound desc = could not find container \"a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2\": container with ID starting with a2726557253b4ab0edc9e36d03d8f0abd24b85932cfb8fc043f45072ea6b84e2 not found: ID does not exist" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.649368 4813 scope.go:117] "RemoveContainer" containerID="95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497" Dec 02 10:37:33 crc kubenswrapper[4813]: E1202 10:37:33.649833 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497\": container with ID starting with 95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497 not found: ID does not exist" containerID="95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.649866 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497"} err="failed to get container status \"95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497\": rpc error: code = NotFound desc = could not find container \"95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497\": container with ID starting 
with 95a5a864decdcd81adc29db34bb3caad8b788249c578c38f32b0aad4e2987497 not found: ID does not exist" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.652121 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.658943 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.660120 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.662276 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.663234 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.670415 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.681168 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.684182 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwrf\" (UniqueName: \"kubernetes.io/projected/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-kube-api-access-kmwrf\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.684262 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-config-data\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.684368 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.785530 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggtx7\" (UniqueName: \"kubernetes.io/projected/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-kube-api-access-ggtx7\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.785783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwrf\" (UniqueName: \"kubernetes.io/projected/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-kube-api-access-kmwrf\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.785881 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-config-data\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc 
kubenswrapper[4813]: I1202 10:37:33.785979 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-logs\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.786116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.786213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.786290 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzq5\" (UniqueName: \"kubernetes.io/projected/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-api-access-4rzq5\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.786371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.786445 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.786518 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-config-data\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.786590 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.791666 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.793295 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-config-data\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.804787 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwrf\" (UniqueName: \"kubernetes.io/projected/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-kube-api-access-kmwrf\") pod \"nova-scheduler-0\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.888448 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggtx7\" (UniqueName: \"kubernetes.io/projected/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-kube-api-access-ggtx7\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.888532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-logs\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.888561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.888603 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzq5\" (UniqueName: \"kubernetes.io/projected/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-api-access-4rzq5\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.888622 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.888644 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.888664 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-config-data\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.888681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " 
pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.889390 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-logs\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.892323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.892374 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.892411 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-config-data\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.893173 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.893642 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.904606 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzq5\" (UniqueName: \"kubernetes.io/projected/4c015ea5-0d0f-4139-8a7e-eb19d478f879-kube-api-access-4rzq5\") pod \"kube-state-metrics-0\" (UID: \"4c015ea5-0d0f-4139-8a7e-eb19d478f879\") " pod="openstack/kube-state-metrics-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.909656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggtx7\" (UniqueName: \"kubernetes.io/projected/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-kube-api-access-ggtx7\") pod \"nova-api-0\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.934595 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.970615 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:37:33 crc kubenswrapper[4813]: I1202 10:37:33.978893 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.081912 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb0e843-c886-42eb-844c-a544d47c8c94" path="/var/lib/kubelet/pods/aeb0e843-c886-42eb-844c-a544d47c8c94/volumes" Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.095797 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe1934f-df77-4479-ba6b-183b6fa08d29" path="/var/lib/kubelet/pods/bbe1934f-df77-4479-ba6b-183b6fa08d29/volumes" Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.096499 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c4d29c-2fb2-430a-a07d-f8b060caf264" path="/var/lib/kubelet/pods/d9c4d29c-2fb2-430a-a07d-f8b060caf264/volumes" Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.384482 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:37:34 crc kubenswrapper[4813]: W1202 10:37:34.384680 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd1cea75_2a71_4ef2_b4e4_2072f825dc10.slice/crio-32e378d75c4c91a303f854fcb079c6c3ce24151c7a79818db7f70639b546a13a WatchSource:0}: Error finding container 32e378d75c4c91a303f854fcb079c6c3ce24151c7a79818db7f70639b546a13a: Status 404 returned error can't find the container with id 32e378d75c4c91a303f854fcb079c6c3ce24151c7a79818db7f70639b546a13a Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.429111 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd1cea75-2a71-4ef2-b4e4-2072f825dc10","Type":"ContainerStarted","Data":"32e378d75c4c91a303f854fcb079c6c3ce24151c7a79818db7f70639b546a13a"} Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.432234 4813 generic.go:334] "Generic (PLEG): container finished" podID="3493a790-ccfa-47e0-8182-437b5581f397" containerID="b9c63629fa4e90ecd6d0093f2eac2a674c37660ac0152088cdba2bacb4cabe99" exitCode=0 Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.432259 4813 generic.go:334] "Generic (PLEG): container finished" podID="3493a790-ccfa-47e0-8182-437b5581f397" containerID="99836f517af35f23412f53998f161b3665c656ea7393fe39e78980cfaf5ee1a9" exitCode=2 Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.432269 4813 generic.go:334] "Generic (PLEG): container finished" podID="3493a790-ccfa-47e0-8182-437b5581f397" containerID="3fad0cac112aee8d41a6bfd78fa8f4dbfbcf4c3b10c39def39753c7097beb40b" exitCode=0 Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.432287 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerDied","Data":"b9c63629fa4e90ecd6d0093f2eac2a674c37660ac0152088cdba2bacb4cabe99"} Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.432307 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerDied","Data":"99836f517af35f23412f53998f161b3665c656ea7393fe39e78980cfaf5ee1a9"} Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.432316 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerDied","Data":"3fad0cac112aee8d41a6bfd78fa8f4dbfbcf4c3b10c39def39753c7097beb40b"} Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.496143 4813 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.511571 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:34 crc kubenswrapper[4813]: W1202 10:37:34.511633 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c015ea5_0d0f_4139_8a7e_eb19d478f879.slice/crio-34732fe613d5b9d4ed91a6470d221c428016d06bd119523f7359555fa518db32 WatchSource:0}: Error finding container 34732fe613d5b9d4ed91a6470d221c428016d06bd119523f7359555fa518db32: Status 404 returned error can't find the container with id 34732fe613d5b9d4ed91a6470d221c428016d06bd119523f7359555fa518db32 Dec 02 10:37:34 crc kubenswrapper[4813]: W1202 10:37:34.519320 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d6a7fe2_7bb8_4605_99a8_b46dd10aaef8.slice/crio-16fd93942f8d7e2dbef847fb3118763b17ff2d95297023ad4777af05dfed75b9 WatchSource:0}: Error finding container 16fd93942f8d7e2dbef847fb3118763b17ff2d95297023ad4777af05dfed75b9: Status 404 returned error can't find the container with id 16fd93942f8d7e2dbef847fb3118763b17ff2d95297023ad4777af05dfed75b9 Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.770161 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:37:34 crc kubenswrapper[4813]: I1202 10:37:34.770459 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.440130 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c015ea5-0d0f-4139-8a7e-eb19d478f879","Type":"ContainerStarted","Data":"7c229c39c204f5c5efc20a9f4f16e8c24b5118d1c416720bdfc9fe12e48e4c55"} Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.440438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c015ea5-0d0f-4139-8a7e-eb19d478f879","Type":"ContainerStarted","Data":"34732fe613d5b9d4ed91a6470d221c428016d06bd119523f7359555fa518db32"} Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.440456 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.442026 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd1cea75-2a71-4ef2-b4e4-2072f825dc10","Type":"ContainerStarted","Data":"5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c"} Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.445570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8","Type":"ContainerStarted","Data":"56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77"} Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.445608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8","Type":"ContainerStarted","Data":"b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a"} Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.445625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8","Type":"ContainerStarted","Data":"16fd93942f8d7e2dbef847fb3118763b17ff2d95297023ad4777af05dfed75b9"} Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.461737 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.092776572 podStartE2EDuration="2.461719398s" podCreationTimestamp="2025-12-02 10:37:33 +0000 UTC" firstStartedPulling="2025-12-02 10:37:34.514401698 +0000 UTC m=+1778.709576000" lastFinishedPulling="2025-12-02 10:37:34.883344524 +0000 UTC m=+1779.078518826" observedRunningTime="2025-12-02 10:37:35.453860433 +0000 UTC m=+1779.649034735" watchObservedRunningTime="2025-12-02 10:37:35.461719398 +0000 UTC m=+1779.656893700" Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.485493 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.485456616 podStartE2EDuration="2.485456616s" podCreationTimestamp="2025-12-02 10:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:35.482181153 +0000 UTC m=+1779.677355475" watchObservedRunningTime="2025-12-02 10:37:35.485456616 +0000 UTC m=+1779.680630918" Dec 02 10:37:35 crc kubenswrapper[4813]: I1202 10:37:35.513400 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.513379505 podStartE2EDuration="2.513379505s" podCreationTimestamp="2025-12-02 10:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:35.498204761 +0000 UTC m=+1779.693379073" watchObservedRunningTime="2025-12-02 10:37:35.513379505 +0000 UTC m=+1779.708553807" Dec 02 10:37:36 crc kubenswrapper[4813]: I1202 10:37:36.084882 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:37:36 crc kubenswrapper[4813]: E1202 10:37:36.086993 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.463990 4813 generic.go:334] "Generic (PLEG): container finished" podID="3493a790-ccfa-47e0-8182-437b5581f397" containerID="352245dda1befdbff2327d23561882a11168682f6ff17a8cdb17ccd3fd57a5f5" exitCode=0 Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.464053 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerDied","Data":"352245dda1befdbff2327d23561882a11168682f6ff17a8cdb17ccd3fd57a5f5"} Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.601965 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.656215 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-run-httpd\") pod \"3493a790-ccfa-47e0-8182-437b5581f397\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.656516 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-sg-core-conf-yaml\") pod \"3493a790-ccfa-47e0-8182-437b5581f397\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.656601 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6trcr\" (UniqueName: \"kubernetes.io/projected/3493a790-ccfa-47e0-8182-437b5581f397-kube-api-access-6trcr\") pod \"3493a790-ccfa-47e0-8182-437b5581f397\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.656747 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3493a790-ccfa-47e0-8182-437b5581f397" (UID: "3493a790-ccfa-47e0-8182-437b5581f397"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.656762 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-combined-ca-bundle\") pod \"3493a790-ccfa-47e0-8182-437b5581f397\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.656849 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-log-httpd\") pod \"3493a790-ccfa-47e0-8182-437b5581f397\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.656876 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-scripts\") pod \"3493a790-ccfa-47e0-8182-437b5581f397\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.656919 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-config-data\") pod \"3493a790-ccfa-47e0-8182-437b5581f397\" (UID: \"3493a790-ccfa-47e0-8182-437b5581f397\") " Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.657709 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.657784 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3493a790-ccfa-47e0-8182-437b5581f397" (UID: "3493a790-ccfa-47e0-8182-437b5581f397"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.662271 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3493a790-ccfa-47e0-8182-437b5581f397-kube-api-access-6trcr" (OuterVolumeSpecName: "kube-api-access-6trcr") pod "3493a790-ccfa-47e0-8182-437b5581f397" (UID: "3493a790-ccfa-47e0-8182-437b5581f397"). InnerVolumeSpecName "kube-api-access-6trcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.673473 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-scripts" (OuterVolumeSpecName: "scripts") pod "3493a790-ccfa-47e0-8182-437b5581f397" (UID: "3493a790-ccfa-47e0-8182-437b5581f397"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.694161 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3493a790-ccfa-47e0-8182-437b5581f397" (UID: "3493a790-ccfa-47e0-8182-437b5581f397"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.745678 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3493a790-ccfa-47e0-8182-437b5581f397" (UID: "3493a790-ccfa-47e0-8182-437b5581f397"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.759199 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.759238 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6trcr\" (UniqueName: \"kubernetes.io/projected/3493a790-ccfa-47e0-8182-437b5581f397-kube-api-access-6trcr\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.759252 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.759261 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3493a790-ccfa-47e0-8182-437b5581f397-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.759270 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.772227 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-config-data" (OuterVolumeSpecName: "config-data") pod "3493a790-ccfa-47e0-8182-437b5581f397" (UID: "3493a790-ccfa-47e0-8182-437b5581f397"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:37 crc kubenswrapper[4813]: I1202 10:37:37.860635 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3493a790-ccfa-47e0-8182-437b5581f397-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.478332 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3493a790-ccfa-47e0-8182-437b5581f397","Type":"ContainerDied","Data":"5db2e05eac308b46ccb507811fea5042b2c2d2ffbeb09b337de85fcb64306063"} Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.478408 4813 scope.go:117] "RemoveContainer" containerID="b9c63629fa4e90ecd6d0093f2eac2a674c37660ac0152088cdba2bacb4cabe99" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.479533 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.506469 4813 scope.go:117] "RemoveContainer" containerID="99836f517af35f23412f53998f161b3665c656ea7393fe39e78980cfaf5ee1a9" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.507610 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.518199 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.534447 4813 scope.go:117] "RemoveContainer" containerID="352245dda1befdbff2327d23561882a11168682f6ff17a8cdb17ccd3fd57a5f5" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.541380 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:38 crc kubenswrapper[4813]: E1202 10:37:38.541803 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="sg-core" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.541828 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="sg-core" Dec 02 10:37:38 crc kubenswrapper[4813]: E1202 10:37:38.541852 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="proxy-httpd" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.541861 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="proxy-httpd" Dec 02 10:37:38 crc kubenswrapper[4813]: E1202 10:37:38.541875 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="ceilometer-notification-agent" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.541884 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="ceilometer-notification-agent" Dec 02 10:37:38 crc kubenswrapper[4813]: E1202 10:37:38.541913 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="ceilometer-central-agent" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.541922 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="ceilometer-central-agent" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.542150 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="proxy-httpd" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.542177 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="sg-core" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.542188 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="ceilometer-central-agent" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.542208 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3493a790-ccfa-47e0-8182-437b5581f397" containerName="ceilometer-notification-agent" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.544191 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.548545 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.548842 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.549907 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.557261 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.565773 4813 scope.go:117] "RemoveContainer" containerID="3fad0cac112aee8d41a6bfd78fa8f4dbfbcf4c3b10c39def39753c7097beb40b" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.576541 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-log-httpd\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.576614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-scripts\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.576639 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.576695 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dslhl\" (UniqueName: \"kubernetes.io/projected/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-kube-api-access-dslhl\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.576732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-config-data\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " 
pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.576761 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.576847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-run-httpd\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.576873 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.678710 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-run-httpd\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.679182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.679288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-log-httpd\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.679338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.679359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-scripts\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.679415 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dslhl\" (UniqueName: \"kubernetes.io/projected/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-kube-api-access-dslhl\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.679454 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-config-data\") pod 
\"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.679477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.679841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-run-httpd\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.680638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-log-httpd\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.686898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-scripts\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.687047 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.690797 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.691198 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-config-data\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.697844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.700968 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dslhl\" (UniqueName: \"kubernetes.io/projected/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-kube-api-access-dslhl\") pod \"ceilometer-0\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.769338 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.879255 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:37:38 crc kubenswrapper[4813]: I1202 10:37:38.935386 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 10:37:39 crc kubenswrapper[4813]: I1202 10:37:39.368033 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:39 crc kubenswrapper[4813]: W1202 10:37:39.372325 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45eb3740_3b6a_4dc7_b484_ce9fe4924ea8.slice/crio-a6563146cd0bf9954655ff097539e3745de0cfeb2ffecc4b07195eff99af299e WatchSource:0}: Error finding container a6563146cd0bf9954655ff097539e3745de0cfeb2ffecc4b07195eff99af299e: Status 404 returned error can't find the container with id a6563146cd0bf9954655ff097539e3745de0cfeb2ffecc4b07195eff99af299e Dec 02 10:37:39 crc kubenswrapper[4813]: I1202 10:37:39.487969 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerStarted","Data":"a6563146cd0bf9954655ff097539e3745de0cfeb2ffecc4b07195eff99af299e"} Dec 02 10:37:39 crc kubenswrapper[4813]: I1202 10:37:39.770269 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 10:37:39 crc kubenswrapper[4813]: I1202 10:37:39.771531 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 10:37:40 crc kubenswrapper[4813]: I1202 10:37:40.094083 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3493a790-ccfa-47e0-8182-437b5581f397" path="/var/lib/kubelet/pods/3493a790-ccfa-47e0-8182-437b5581f397/volumes" Dec 02 10:37:40 crc kubenswrapper[4813]: I1202 10:37:40.499585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerStarted","Data":"e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a"} Dec 02 10:37:40 crc kubenswrapper[4813]: I1202 10:37:40.785263 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:37:40 crc kubenswrapper[4813]: I1202 10:37:40.785268 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:37:42 crc kubenswrapper[4813]: I1202 10:37:42.520965 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerStarted","Data":"f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1"} Dec 02 10:37:43 crc kubenswrapper[4813]: I1202 10:37:43.533308 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerStarted","Data":"d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168"} Dec 02 10:37:43 crc kubenswrapper[4813]: I1202 10:37:43.934964 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 10:37:43 crc kubenswrapper[4813]: I1202 10:37:43.967378 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 10:37:43 crc kubenswrapper[4813]: I1202 10:37:43.971192 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:37:43 crc kubenswrapper[4813]: I1202 10:37:43.971229 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:37:43 crc kubenswrapper[4813]: I1202 10:37:43.993277 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 02 10:37:44 crc kubenswrapper[4813]: I1202 10:37:44.567352 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 10:37:45 crc kubenswrapper[4813]: I1202 10:37:45.054243 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 10:37:45 crc kubenswrapper[4813]: I1202 10:37:45.054571 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 10:37:45 crc kubenswrapper[4813]: I1202 10:37:45.557877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerStarted","Data":"280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53"} Dec 02 10:37:45 crc kubenswrapper[4813]: I1202 10:37:45.577978 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.262727147 podStartE2EDuration="7.577959016s" podCreationTimestamp="2025-12-02 10:37:38 +0000 UTC" firstStartedPulling="2025-12-02 10:37:39.375294969 +0000 UTC m=+1783.570469271" lastFinishedPulling="2025-12-02 10:37:44.690526838 +0000 UTC m=+1788.885701140" observedRunningTime="2025-12-02 10:37:45.576740321 +0000 UTC m=+1789.771914633" watchObservedRunningTime="2025-12-02 10:37:45.577959016 +0000 UTC m=+1789.773133318" Dec 02 10:37:46 crc kubenswrapper[4813]: I1202 10:37:46.565967 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:37:48 crc kubenswrapper[4813]: I1202 10:37:48.068289 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:37:48 crc kubenswrapper[4813]: E1202 10:37:48.068503 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:37:49 crc kubenswrapper[4813]: I1202 10:37:49.776379 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 
10:37:49 crc kubenswrapper[4813]: I1202 10:37:49.776943 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 10:37:49 crc kubenswrapper[4813]: I1202 10:37:49.787381 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 10:37:50 crc kubenswrapper[4813]: I1202 10:37:50.611430 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.617433 4813 generic.go:334] "Generic (PLEG): container finished" podID="24cba461-894a-4ffb-84d8-33f354e4b9d7" containerID="cdc6ec14e0027af0f1e5fd08bdde3d9f9867690bb37cb120647529077ae9c3a4" exitCode=137 Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.617482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24cba461-894a-4ffb-84d8-33f354e4b9d7","Type":"ContainerDied","Data":"cdc6ec14e0027af0f1e5fd08bdde3d9f9867690bb37cb120647529077ae9c3a4"} Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.703829 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.727455 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-config-data\") pod \"24cba461-894a-4ffb-84d8-33f354e4b9d7\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.727558 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-combined-ca-bundle\") pod \"24cba461-894a-4ffb-84d8-33f354e4b9d7\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.728320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qds2l\" (UniqueName: \"kubernetes.io/projected/24cba461-894a-4ffb-84d8-33f354e4b9d7-kube-api-access-qds2l\") pod \"24cba461-894a-4ffb-84d8-33f354e4b9d7\" (UID: \"24cba461-894a-4ffb-84d8-33f354e4b9d7\") " Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.738414 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24cba461-894a-4ffb-84d8-33f354e4b9d7-kube-api-access-qds2l" (OuterVolumeSpecName: "kube-api-access-qds2l") pod "24cba461-894a-4ffb-84d8-33f354e4b9d7" (UID: "24cba461-894a-4ffb-84d8-33f354e4b9d7"). InnerVolumeSpecName "kube-api-access-qds2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.754214 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24cba461-894a-4ffb-84d8-33f354e4b9d7" (UID: "24cba461-894a-4ffb-84d8-33f354e4b9d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.767188 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-config-data" (OuterVolumeSpecName: "config-data") pod "24cba461-894a-4ffb-84d8-33f354e4b9d7" (UID: "24cba461-894a-4ffb-84d8-33f354e4b9d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.829672 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.829709 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24cba461-894a-4ffb-84d8-33f354e4b9d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:51 crc kubenswrapper[4813]: I1202 10:37:51.829724 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qds2l\" (UniqueName: \"kubernetes.io/projected/24cba461-894a-4ffb-84d8-33f354e4b9d7-kube-api-access-qds2l\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.632996 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24cba461-894a-4ffb-84d8-33f354e4b9d7","Type":"ContainerDied","Data":"19d3e13b2ea9ccb01773411d865f09e4c4f723a880b6a071049fe41656e0d12a"} Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.633061 4813 scope.go:117] "RemoveContainer" containerID="cdc6ec14e0027af0f1e5fd08bdde3d9f9867690bb37cb120647529077ae9c3a4" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.633016 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.656330 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.668766 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.683112 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:52 crc kubenswrapper[4813]: E1202 10:37:52.683607 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24cba461-894a-4ffb-84d8-33f354e4b9d7" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.683628 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="24cba461-894a-4ffb-84d8-33f354e4b9d7" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.683883 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="24cba461-894a-4ffb-84d8-33f354e4b9d7" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.684662 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.688242 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.688478 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.688726 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.693777 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.750393 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvh2\" (UniqueName: \"kubernetes.io/projected/04a01b98-1643-4e76-8fde-e7951d129581-kube-api-access-vmvh2\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.750836 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.751034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.751149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.751235 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.852817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.853229 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 
10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.853402 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.853521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmvh2\" (UniqueName: \"kubernetes.io/projected/04a01b98-1643-4e76-8fde-e7951d129581-kube-api-access-vmvh2\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.853646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.859501 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.859516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.860121 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.871786 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a01b98-1643-4e76-8fde-e7951d129581-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:52 crc kubenswrapper[4813]: I1202 10:37:52.872753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmvh2\" (UniqueName: \"kubernetes.io/projected/04a01b98-1643-4e76-8fde-e7951d129581-kube-api-access-vmvh2\") pod \"nova-cell1-novncproxy-0\" (UID: \"04a01b98-1643-4e76-8fde-e7951d129581\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.020194 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.434536 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 10:37:53 crc kubenswrapper[4813]: W1202 10:37:53.438423 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04a01b98_1643_4e76_8fde_e7951d129581.slice/crio-bce15207b7667fac55a6670d44e9c5a9b580c4eca90c42e33016f3e8c38deaac WatchSource:0}: Error finding container bce15207b7667fac55a6670d44e9c5a9b580c4eca90c42e33016f3e8c38deaac: Status 404 returned error can't find the container with id bce15207b7667fac55a6670d44e9c5a9b580c4eca90c42e33016f3e8c38deaac Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.645774 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04a01b98-1643-4e76-8fde-e7951d129581","Type":"ContainerStarted","Data":"11b6bf7d97764380b1cbff2c234f26c0e5cb5f494076962f3f6fe4320c7036a1"} Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.646742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04a01b98-1643-4e76-8fde-e7951d129581","Type":"ContainerStarted","Data":"bce15207b7667fac55a6670d44e9c5a9b580c4eca90c42e33016f3e8c38deaac"} Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.663017 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.662994931 podStartE2EDuration="1.662994931s" podCreationTimestamp="2025-12-02 10:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:53.660610153 +0000 UTC m=+1797.855784445" watchObservedRunningTime="2025-12-02 10:37:53.662994931 +0000 UTC m=+1797.858169233" Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.975803 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.976557 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.976703 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 10:37:53 crc kubenswrapper[4813]: I1202 10:37:53.978520 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.078345 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24cba461-894a-4ffb-84d8-33f354e4b9d7" path="/var/lib/kubelet/pods/24cba461-894a-4ffb-84d8-33f354e4b9d7/volumes" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.654656 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.658341 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.841637 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zmknr"] Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.844105 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.862273 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zmknr"] Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.992490 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-dns-svc\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.992573 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.992648 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-config\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.992772 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:54 crc kubenswrapper[4813]: I1202 10:37:54.992940 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pzxr\" (UniqueName: \"kubernetes.io/projected/621e5b06-b0c0-4563-a599-e03cc944cccc-kube-api-access-2pzxr\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.095222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-dns-svc\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.095302 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.095326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-config\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.095377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.095453 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pzxr\" (UniqueName: \"kubernetes.io/projected/621e5b06-b0c0-4563-a599-e03cc944cccc-kube-api-access-2pzxr\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.096257 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.096445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-config\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.096522 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.096809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-dns-svc\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.136289 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pzxr\" (UniqueName: \"kubernetes.io/projected/621e5b06-b0c0-4563-a599-e03cc944cccc-kube-api-access-2pzxr\") pod \"dnsmasq-dns-5b856c5697-zmknr\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.177859 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:55 crc kubenswrapper[4813]: W1202 10:37:55.659022 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621e5b06_b0c0_4563_a599_e03cc944cccc.slice/crio-f2251588473adf163f55377b81fa8788c4734863b4e79646ad93f1c7d66f107d WatchSource:0}: Error finding container f2251588473adf163f55377b81fa8788c4734863b4e79646ad93f1c7d66f107d: Status 404 returned error can't find the container with id f2251588473adf163f55377b81fa8788c4734863b4e79646ad93f1c7d66f107d Dec 02 10:37:55 crc kubenswrapper[4813]: I1202 10:37:55.660947 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zmknr"] Dec 02 10:37:56 crc kubenswrapper[4813]: I1202 10:37:56.675200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" event={"ID":"621e5b06-b0c0-4563-a599-e03cc944cccc","Type":"ContainerStarted","Data":"1bf19394cb6bdd889ac132b48a3fe1f34449761f1e21d23933e81fb1de54cdd4"} Dec 02 10:37:56 crc kubenswrapper[4813]: I1202 10:37:56.675782 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" event={"ID":"621e5b06-b0c0-4563-a599-e03cc944cccc","Type":"ContainerStarted","Data":"f2251588473adf163f55377b81fa8788c4734863b4e79646ad93f1c7d66f107d"} Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.081523 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.081791 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="ceilometer-central-agent" containerID="cri-o://e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a" gracePeriod=30 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.081845 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="sg-core" containerID="cri-o://d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168" gracePeriod=30 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.081907 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="ceilometer-notification-agent" containerID="cri-o://f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1" gracePeriod=30 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.081928 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="proxy-httpd" containerID="cri-o://280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53" gracePeriod=30 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.091506 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.184:3000/\": EOF" Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.264834 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.683692 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerID="280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53" exitCode=0 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.684001 4813 generic.go:334] "Generic (PLEG): container finished" podID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerID="d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168" exitCode=2 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.684014 4813 generic.go:334] "Generic (PLEG): container finished" podID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerID="e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a" exitCode=0 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.683736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerDied","Data":"280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53"} Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.684125 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerDied","Data":"d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168"} Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.684145 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerDied","Data":"e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a"} Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.685364 4813 generic.go:334] "Generic (PLEG): container finished" podID="621e5b06-b0c0-4563-a599-e03cc944cccc" containerID="1bf19394cb6bdd889ac132b48a3fe1f34449761f1e21d23933e81fb1de54cdd4" exitCode=0 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.685597 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-log" containerID="cri-o://b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a" gracePeriod=30 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.685640 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-api" containerID="cri-o://56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77" gracePeriod=30 Dec 02 10:37:57 crc kubenswrapper[4813]: I1202 10:37:57.685517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" event={"ID":"621e5b06-b0c0-4563-a599-e03cc944cccc","Type":"ContainerDied","Data":"1bf19394cb6bdd889ac132b48a3fe1f34449761f1e21d23933e81fb1de54cdd4"} Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.020847 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.702148 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.713005 4813 generic.go:334] "Generic (PLEG): container finished" podID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerID="f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1" exitCode=0 Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.713089 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerDied","Data":"f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1"} Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.713121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8","Type":"ContainerDied","Data":"a6563146cd0bf9954655ff097539e3745de0cfeb2ffecc4b07195eff99af299e"} Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.713167 4813 scope.go:117] "RemoveContainer" containerID="280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.724519 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" event={"ID":"621e5b06-b0c0-4563-a599-e03cc944cccc","Type":"ContainerStarted","Data":"a3a5a4c80dff66b1926bcfce11c24da4ecf3bb46bb522653617495a8e48d7dce"} Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.725610 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.743129 4813 generic.go:334] "Generic (PLEG): container finished" podID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerID="b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a" exitCode=143 Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.743176 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8","Type":"ContainerDied","Data":"b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a"} Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.783895 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" podStartSLOduration=4.783874883 podStartE2EDuration="4.783874883s" podCreationTimestamp="2025-12-02 10:37:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:37:58.771929952 +0000 UTC m=+1802.967104264" watchObservedRunningTime="2025-12-02 10:37:58.783874883 +0000 UTC m=+1802.979049185" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.789227 4813 scope.go:117] "RemoveContainer" containerID="d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.818210 4813 scope.go:117] "RemoveContainer" containerID="f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.838564 4813 scope.go:117] "RemoveContainer" containerID="e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.855748 4813 scope.go:117] "RemoveContainer" containerID="280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53" Dec 02 10:37:58 crc kubenswrapper[4813]: E1202 10:37:58.856319 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53\": container with ID starting with 280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53 not found: ID does not exist" containerID="280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.856384 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53"} err="failed to get container status \"280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53\": rpc error: code = NotFound desc = could not find container \"280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53\": container with ID starting with 280024adba450222efa52222996375a51d9677037ec60ae078a4ab4423354f53 not found: ID does not exist" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.856413 4813 scope.go:117] "RemoveContainer" containerID="d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168" Dec 02 10:37:58 crc kubenswrapper[4813]: E1202 10:37:58.856947 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168\": container with ID starting with d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168 not found: ID does not exist" containerID="d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.856984 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168"} err="failed to get container status \"d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168\": rpc error: code = NotFound desc = could not find container \"d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168\": container with ID starting with d05be530df1266e97698dcb47f7ce6831962b5d4bca17dac0943d5e3c8d0f168 not found: ID does not exist" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.857010 4813 scope.go:117] "RemoveContainer" containerID="f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1" Dec 02 10:37:58 crc kubenswrapper[4813]: E1202 10:37:58.857388 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1\": container with ID starting with f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1 not found: ID does not exist" containerID="f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.857415 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1"} err="failed to get container status \"f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1\": rpc error: code = NotFound desc = could not find container \"f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1\": container with ID starting with f3a256308d83ff007f68fbc94e4ad540cac0febd36258347b2fcdda35be8f2b1 not found: ID does not exist" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.857428 4813 scope.go:117] "RemoveContainer" 
containerID="e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a" Dec 02 10:37:58 crc kubenswrapper[4813]: E1202 10:37:58.857737 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a\": container with ID starting with e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a not found: ID does not exist" containerID="e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.857761 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a"} err="failed to get container status \"e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a\": rpc error: code = NotFound desc = could not find container \"e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a\": container with ID starting with e6c74fa332d71f66df1aee2eba67eb725111b5cacaaeb1fe079703a58102eb8a not found: ID does not exist" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.872808 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-run-httpd\") pod \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.872881 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dslhl\" (UniqueName: \"kubernetes.io/projected/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-kube-api-access-dslhl\") pod \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.872957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-scripts\") pod \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.873235 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-config-data\") pod \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.873319 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-ceilometer-tls-certs\") pod \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.873406 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-log-httpd\") pod \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.873427 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-sg-core-conf-yaml\") pod \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\" (UID: 
\"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.873470 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" (UID: "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.873505 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-combined-ca-bundle\") pod \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\" (UID: \"45eb3740-3b6a-4dc7-b484-ce9fe4924ea8\") " Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.873940 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" (UID: "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.874738 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.874762 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.887295 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-scripts" (OuterVolumeSpecName: "scripts") pod "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" (UID: "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.887391 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-kube-api-access-dslhl" (OuterVolumeSpecName: "kube-api-access-dslhl") pod "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" (UID: "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8"). InnerVolumeSpecName "kube-api-access-dslhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.909256 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" (UID: "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.924940 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" (UID: "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.957607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" (UID: "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.976341 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.976375 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dslhl\" (UniqueName: \"kubernetes.io/projected/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-kube-api-access-dslhl\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.976388 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.976400 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.976409 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:58 crc kubenswrapper[4813]: I1202 10:37:58.978309 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-config-data" (OuterVolumeSpecName: "config-data") pod "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" (UID: "45eb3740-3b6a-4dc7-b484-ce9fe4924ea8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.077996 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.751798 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.799707 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.816917 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827158 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:59 crc kubenswrapper[4813]: E1202 10:37:59.827603 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="ceilometer-notification-agent" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827619 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="ceilometer-notification-agent" Dec 02 10:37:59 crc kubenswrapper[4813]: E1202 10:37:59.827640 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="sg-core" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827646 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="sg-core" Dec 02 10:37:59 crc kubenswrapper[4813]: E1202 10:37:59.827671 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="ceilometer-central-agent" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827678 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="ceilometer-central-agent" Dec 02 10:37:59 crc kubenswrapper[4813]: E1202 10:37:59.827688 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="proxy-httpd" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827694 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="proxy-httpd" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827865 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="sg-core" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827879 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="ceilometer-notification-agent" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827890 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="proxy-httpd" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.827906 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" containerName="ceilometer-central-agent" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.829871 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.832637 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.832676 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.832681 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.839005 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.993559 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-run-httpd\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.993648 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-config-data\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.993702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj78r\" (UniqueName: \"kubernetes.io/projected/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-kube-api-access-nj78r\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.993942 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.993989 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-scripts\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.994011 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-log-httpd\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.994134 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:37:59 crc kubenswrapper[4813]: I1202 10:37:59.994196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.086196 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45eb3740-3b6a-4dc7-b484-ce9fe4924ea8" path="/var/lib/kubelet/pods/45eb3740-3b6a-4dc7-b484-ce9fe4924ea8/volumes" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-run-httpd\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096289 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-config-data\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096322 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj78r\" (UniqueName: \"kubernetes.io/projected/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-kube-api-access-nj78r\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096395 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-scripts\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096480 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-log-httpd\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096580 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.096772 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-run-httpd\") pod \"ceilometer-0\" (UID: 
\"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.097006 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-log-httpd\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.103292 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.103445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.104473 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.104756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-scripts\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.105970 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-config-data\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.115018 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj78r\" (UniqueName: \"kubernetes.io/projected/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-kube-api-access-nj78r\") pod \"ceilometer-0\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.147601 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.573692 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 10:38:00 crc kubenswrapper[4813]: I1202 10:38:00.767839 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerStarted","Data":"965a4c7de37836743455ec967128991a4350894c9f9564d3a4b7ed0a96c557ed"} Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.068238 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:38:01 crc kubenswrapper[4813]: E1202 10:38:01.068969 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.262158 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.427544 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-logs\") pod \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.427917 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-config-data\") pod \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.428144 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggtx7\" (UniqueName: \"kubernetes.io/projected/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-kube-api-access-ggtx7\") pod \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.428251 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-combined-ca-bundle\") pod \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\" (UID: \"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8\") " Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.429756 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-logs" (OuterVolumeSpecName: "logs") pod "6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" (UID: "6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.446987 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-kube-api-access-ggtx7" (OuterVolumeSpecName: "kube-api-access-ggtx7") pod "6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" (UID: "6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8"). 
InnerVolumeSpecName "kube-api-access-ggtx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.472636 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-config-data" (OuterVolumeSpecName: "config-data") pod "6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" (UID: "6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.477095 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" (UID: "6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.530033 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.530321 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.530331 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggtx7\" (UniqueName: \"kubernetes.io/projected/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-kube-api-access-ggtx7\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.530340 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.778193 4813 generic.go:334] "Generic (PLEG): container finished" podID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerID="56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77" exitCode=0 Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.778316 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.778369 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8","Type":"ContainerDied","Data":"56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77"} Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.778719 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8","Type":"ContainerDied","Data":"16fd93942f8d7e2dbef847fb3118763b17ff2d95297023ad4777af05dfed75b9"} Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.778750 4813 scope.go:117] "RemoveContainer" containerID="56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.786061 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerStarted","Data":"ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a"} Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.817133 4813 scope.go:117] "RemoveContainer" containerID="b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.828029 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.844517 4813 scope.go:117] "RemoveContainer" containerID="56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.845190 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:01 crc kubenswrapper[4813]: E1202 10:38:01.845366 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77\": container with ID starting with 56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77 not found: ID does not exist" containerID="56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.845479 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77"} err="failed to get container status \"56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77\": rpc error: code = NotFound desc = could not find container \"56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77\": container with ID starting with 56840169e39d4b48c79c48ef888a0b5fba1db47df10c2807866e717aab6e8b77 not found: ID does not exist" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.845559 4813 scope.go:117] "RemoveContainer" containerID="b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a" Dec 02 10:38:01 crc kubenswrapper[4813]: E1202 10:38:01.845999 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a\": container with ID starting with b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a not found: ID does not exist" containerID="b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.846027 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a"} err="failed to get container status \"b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a\": rpc error: code = NotFound desc = could not find container \"b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a\": container with ID starting with b1484f1f0ee504738102df34eea8216d33a4e08d2a0bee54e8609eb426acf07a not found: ID does not exist" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.855354 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:01 crc kubenswrapper[4813]: E1202 10:38:01.856035 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-log" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.856197 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-log" Dec 02 10:38:01 crc kubenswrapper[4813]: E1202 10:38:01.856283 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-api" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.856350 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-api" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.856602 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-log" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.856666 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" containerName="nova-api-api" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.857924 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.860158 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.860702 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.860870 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 02 10:38:01 crc kubenswrapper[4813]: I1202 10:38:01.866125 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.039268 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.039340 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.039372 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-config-data\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.039395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be76aa9-a556-4e86-bfb6-447619914729-logs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.039450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-public-tls-certs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.039473 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqn6\" (UniqueName: \"kubernetes.io/projected/7be76aa9-a556-4e86-bfb6-447619914729-kube-api-access-8bqn6\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.078237 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8" path="/var/lib/kubelet/pods/6d6a7fe2-7bb8-4605-99a8-b46dd10aaef8/volumes" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.141047 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-config-data\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc 
kubenswrapper[4813]: I1202 10:38:02.141197 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be76aa9-a556-4e86-bfb6-447619914729-logs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.141316 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-public-tls-certs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.141363 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqn6\" (UniqueName: \"kubernetes.io/projected/7be76aa9-a556-4e86-bfb6-447619914729-kube-api-access-8bqn6\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.141484 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.141552 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.141677 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be76aa9-a556-4e86-bfb6-447619914729-logs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.147416 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-config-data\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.147884 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.150038 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.152596 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-public-tls-certs\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.167469 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8bqn6\" (UniqueName: \"kubernetes.io/projected/7be76aa9-a556-4e86-bfb6-447619914729-kube-api-access-8bqn6\") pod \"nova-api-0\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.177780 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:38:02 crc kubenswrapper[4813]: W1202 10:38:02.637800 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be76aa9_a556_4e86_bfb6_447619914729.slice/crio-b559972a661d1c6269e70ab792497207c6caf4b9e9455f0eecb9e6e5baeb91b9 WatchSource:0}: Error finding container b559972a661d1c6269e70ab792497207c6caf4b9e9455f0eecb9e6e5baeb91b9: Status 404 returned error can't find the container with id b559972a661d1c6269e70ab792497207c6caf4b9e9455f0eecb9e6e5baeb91b9 Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.663366 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:02 crc kubenswrapper[4813]: I1202 10:38:02.797989 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7be76aa9-a556-4e86-bfb6-447619914729","Type":"ContainerStarted","Data":"b559972a661d1c6269e70ab792497207c6caf4b9e9455f0eecb9e6e5baeb91b9"} Dec 02 10:38:03 crc kubenswrapper[4813]: I1202 10:38:03.022224 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:38:03 crc kubenswrapper[4813]: I1202 10:38:03.041411 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:38:03 crc kubenswrapper[4813]: I1202 10:38:03.810763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7be76aa9-a556-4e86-bfb6-447619914729","Type":"ContainerStarted","Data":"82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f"} Dec 02 10:38:03 crc kubenswrapper[4813]: I1202 10:38:03.811135 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7be76aa9-a556-4e86-bfb6-447619914729","Type":"ContainerStarted","Data":"12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b"} Dec 02 10:38:03 crc kubenswrapper[4813]: I1202 10:38:03.815038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerStarted","Data":"2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c"} Dec 02 10:38:03 crc kubenswrapper[4813]: I1202 10:38:03.838900 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 02 10:38:03 crc kubenswrapper[4813]: I1202 10:38:03.839493 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.839472711 podStartE2EDuration="2.839472711s" podCreationTimestamp="2025-12-02 10:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:38:03.827365744 +0000 UTC m=+1808.022540046" watchObservedRunningTime="2025-12-02 10:38:03.839472711 +0000 UTC m=+1808.034647033" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.001391 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sj6rf"] 
Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.003202 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.009581 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.010687 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.014601 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sj6rf"] Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.191898 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-config-data\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.191966 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-scripts\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.191992 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdfs\" (UniqueName: \"kubernetes.io/projected/3f99d60d-1399-4a43-9711-01a53a17257a-kube-api-access-7zdfs\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.192017 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.293852 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-config-data\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.293943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-scripts\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.293980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdfs\" (UniqueName: \"kubernetes.io/projected/3f99d60d-1399-4a43-9711-01a53a17257a-kube-api-access-7zdfs\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.294001 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.299554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-scripts\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.300682 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-config-data\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.301577 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.319231 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdfs\" (UniqueName: \"kubernetes.io/projected/3f99d60d-1399-4a43-9711-01a53a17257a-kube-api-access-7zdfs\") pod \"nova-cell1-cell-mapping-sj6rf\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.327115 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.778805 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sj6rf"] Dec 02 10:38:04 crc kubenswrapper[4813]: W1202 10:38:04.786847 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f99d60d_1399_4a43_9711_01a53a17257a.slice/crio-6b8a4f72cc4a8d574e2811b551d47d083e5656e7cea71f55eb5ffec46d0ae302 WatchSource:0}: Error finding container 6b8a4f72cc4a8d574e2811b551d47d083e5656e7cea71f55eb5ffec46d0ae302: Status 404 returned error can't find the container with id 6b8a4f72cc4a8d574e2811b551d47d083e5656e7cea71f55eb5ffec46d0ae302 Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.825266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerStarted","Data":"cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7"} Dec 02 10:38:04 crc kubenswrapper[4813]: I1202 10:38:04.826504 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sj6rf" event={"ID":"3f99d60d-1399-4a43-9711-01a53a17257a","Type":"ContainerStarted","Data":"6b8a4f72cc4a8d574e2811b551d47d083e5656e7cea71f55eb5ffec46d0ae302"} Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.180873 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.242585 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-57n8s"] Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.242865 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" podUID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" containerName="dnsmasq-dns" containerID="cri-o://f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02" gracePeriod=10 Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.792292 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.836120 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sj6rf" event={"ID":"3f99d60d-1399-4a43-9711-01a53a17257a","Type":"ContainerStarted","Data":"2ccebba788d3fd3ed6e792f77b123dee7ca1dbc3825a5fd36e979145b1a27552"} Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.841746 4813 generic.go:334] "Generic (PLEG): container finished" podID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" containerID="f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02" exitCode=0 Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.841789 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" event={"ID":"7fd1b4ae-380f-4b1a-9ead-df3407a2814d","Type":"ContainerDied","Data":"f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02"} Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.841814 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" event={"ID":"7fd1b4ae-380f-4b1a-9ead-df3407a2814d","Type":"ContainerDied","Data":"a30a547382cd43c2da260d0833b025459e4c2cc68b40ee8fbb9b5f1cebc210e1"} Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.841831 4813 scope.go:117] "RemoveContainer" containerID="f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.841943 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-57n8s" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.857293 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sj6rf" podStartSLOduration=2.8572759899999998 podStartE2EDuration="2.85727599s" podCreationTimestamp="2025-12-02 10:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:38:05.851885126 +0000 UTC m=+1810.047059428" watchObservedRunningTime="2025-12-02 10:38:05.85727599 +0000 UTC m=+1810.052450292" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.921713 4813 scope.go:117] "RemoveContainer" containerID="926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.924481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-sb\") pod \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.924656 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-nb\") pod \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.924692 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-dns-svc\") pod \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.924754 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bz4hf\" (UniqueName: \"kubernetes.io/projected/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-kube-api-access-bz4hf\") pod \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.925650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-config\") pod \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\" (UID: \"7fd1b4ae-380f-4b1a-9ead-df3407a2814d\") " Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.939503 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-kube-api-access-bz4hf" (OuterVolumeSpecName: "kube-api-access-bz4hf") pod "7fd1b4ae-380f-4b1a-9ead-df3407a2814d" (UID: "7fd1b4ae-380f-4b1a-9ead-df3407a2814d"). InnerVolumeSpecName "kube-api-access-bz4hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.955278 4813 scope.go:117] "RemoveContainer" containerID="f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02" Dec 02 10:38:05 crc kubenswrapper[4813]: E1202 10:38:05.955686 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02\": container with ID starting with f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02 not found: ID does not exist" containerID="f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.955722 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02"} err="failed to get container status \"f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02\": rpc error: code = NotFound desc = could not find container \"f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02\": container with ID starting with f2324624aa33cfc0ae2399398c9fdfe9519c460b758833a23c0e3c31c3b8ef02 not found: ID does not exist" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.955746 4813 scope.go:117] "RemoveContainer" containerID="926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b" Dec 02 10:38:05 crc kubenswrapper[4813]: E1202 10:38:05.956129 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b\": container with ID starting with 926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b not found: ID does not exist" containerID="926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.956155 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b"} err="failed to get container status \"926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b\": rpc error: code = NotFound desc = could not find container \"926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b\": container with ID starting with 926a6d70921078c5838bff2c60714b450d1e2f572228e9cb6f8aa53043231c2b not found: ID does not exist" Dec 
02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.970259 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7fd1b4ae-380f-4b1a-9ead-df3407a2814d" (UID: "7fd1b4ae-380f-4b1a-9ead-df3407a2814d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.979699 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-config" (OuterVolumeSpecName: "config") pod "7fd1b4ae-380f-4b1a-9ead-df3407a2814d" (UID: "7fd1b4ae-380f-4b1a-9ead-df3407a2814d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.984652 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fd1b4ae-380f-4b1a-9ead-df3407a2814d" (UID: "7fd1b4ae-380f-4b1a-9ead-df3407a2814d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:38:05 crc kubenswrapper[4813]: I1202 10:38:05.987849 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7fd1b4ae-380f-4b1a-9ead-df3407a2814d" (UID: "7fd1b4ae-380f-4b1a-9ead-df3407a2814d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.027042 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.027081 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.027091 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.027100 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz4hf\" (UniqueName: \"kubernetes.io/projected/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-kube-api-access-bz4hf\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.027111 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fd1b4ae-380f-4b1a-9ead-df3407a2814d-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.170943 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-57n8s"] Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.180575 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-57n8s"] Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.854110 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerStarted","Data":"b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538"} Dec 02 10:38:06 crc kubenswrapper[4813]: I1202 10:38:06.880012 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.850821004 podStartE2EDuration="7.879994525s" podCreationTimestamp="2025-12-02 10:37:59 +0000 UTC" firstStartedPulling="2025-12-02 10:38:00.593005509 +0000 UTC m=+1804.788179811" lastFinishedPulling="2025-12-02 10:38:05.62217903 +0000 UTC m=+1809.817353332" observedRunningTime="2025-12-02 10:38:06.874309153 +0000 UTC m=+1811.069483455" watchObservedRunningTime="2025-12-02 10:38:06.879994525 +0000 UTC m=+1811.075168817" Dec 02 10:38:07 crc kubenswrapper[4813]: I1202 10:38:07.864109 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 10:38:08 crc kubenswrapper[4813]: I1202 10:38:08.079209 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" path="/var/lib/kubelet/pods/7fd1b4ae-380f-4b1a-9ead-df3407a2814d/volumes" Dec 02 10:38:10 crc kubenswrapper[4813]: I1202 10:38:10.897226 4813 generic.go:334] "Generic (PLEG): container finished" podID="3f99d60d-1399-4a43-9711-01a53a17257a" containerID="2ccebba788d3fd3ed6e792f77b123dee7ca1dbc3825a5fd36e979145b1a27552" exitCode=0 Dec 02 10:38:10 crc kubenswrapper[4813]: I1202 10:38:10.897306 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sj6rf" event={"ID":"3f99d60d-1399-4a43-9711-01a53a17257a","Type":"ContainerDied","Data":"2ccebba788d3fd3ed6e792f77b123dee7ca1dbc3825a5fd36e979145b1a27552"} Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.178239 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.178888 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.181964 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.335679 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdfs\" (UniqueName: \"kubernetes.io/projected/3f99d60d-1399-4a43-9711-01a53a17257a-kube-api-access-7zdfs\") pod \"3f99d60d-1399-4a43-9711-01a53a17257a\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.335746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-scripts\") pod \"3f99d60d-1399-4a43-9711-01a53a17257a\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.335841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-combined-ca-bundle\") pod \"3f99d60d-1399-4a43-9711-01a53a17257a\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.336064 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-config-data\") pod \"3f99d60d-1399-4a43-9711-01a53a17257a\" (UID: \"3f99d60d-1399-4a43-9711-01a53a17257a\") " Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.340599 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f99d60d-1399-4a43-9711-01a53a17257a-kube-api-access-7zdfs" (OuterVolumeSpecName: "kube-api-access-7zdfs") pod "3f99d60d-1399-4a43-9711-01a53a17257a" (UID: "3f99d60d-1399-4a43-9711-01a53a17257a"). InnerVolumeSpecName "kube-api-access-7zdfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.341249 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-scripts" (OuterVolumeSpecName: "scripts") pod "3f99d60d-1399-4a43-9711-01a53a17257a" (UID: "3f99d60d-1399-4a43-9711-01a53a17257a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.360203 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-config-data" (OuterVolumeSpecName: "config-data") pod "3f99d60d-1399-4a43-9711-01a53a17257a" (UID: "3f99d60d-1399-4a43-9711-01a53a17257a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.379059 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f99d60d-1399-4a43-9711-01a53a17257a" (UID: "3f99d60d-1399-4a43-9711-01a53a17257a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.437671 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.437705 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdfs\" (UniqueName: \"kubernetes.io/projected/3f99d60d-1399-4a43-9711-01a53a17257a-kube-api-access-7zdfs\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.437716 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.437725 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f99d60d-1399-4a43-9711-01a53a17257a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.916769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sj6rf" event={"ID":"3f99d60d-1399-4a43-9711-01a53a17257a","Type":"ContainerDied","Data":"6b8a4f72cc4a8d574e2811b551d47d083e5656e7cea71f55eb5ffec46d0ae302"} Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.916812 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b8a4f72cc4a8d574e2811b551d47d083e5656e7cea71f55eb5ffec46d0ae302" Dec 02 10:38:12 crc kubenswrapper[4813]: I1202 10:38:12.916849 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sj6rf" Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.068360 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376" Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.115988 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.116212 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-log" containerID="cri-o://12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b" gracePeriod=30 Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.116269 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-api" containerID="cri-o://82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f" gracePeriod=30 Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.123792 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": EOF" Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.123925 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.188:8774/\": EOF" Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.155151 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.155466 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fd1cea75-2a71-4ef2-b4e4-2072f825dc10" containerName="nova-scheduler-scheduler" containerID="cri-o://5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c" gracePeriod=30 Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.171704 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.172135 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-log" containerID="cri-o://7a8ca8ad5c1cdcec9606c7ec949d73d3d668e55205fc6e6f620b6385f5907110" gracePeriod=30 Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.172404 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-metadata" containerID="cri-o://db0d4ed82c2c133f26e05b0ac71fa1b871ea8bc01e11c40a6d389d1b330865b9" gracePeriod=30 Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.934233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"2fed071f8239a2c52d08d08e010c11558e2670d682506b388f82e7786b9072db"} Dec 02 10:38:13 crc kubenswrapper[4813]: E1202 10:38:13.938904 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.943763 4813 generic.go:334] "Generic (PLEG): container finished" podID="7be76aa9-a556-4e86-bfb6-447619914729" containerID="12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b" exitCode=143 Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.943886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7be76aa9-a556-4e86-bfb6-447619914729","Type":"ContainerDied","Data":"12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b"} Dec 02 10:38:13 crc kubenswrapper[4813]: E1202 10:38:13.946080 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:38:13 crc kubenswrapper[4813]: E1202 10:38:13.948328 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 10:38:13 crc kubenswrapper[4813]: E1202 10:38:13.948407 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit 
code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fd1cea75-2a71-4ef2-b4e4-2072f825dc10" containerName="nova-scheduler-scheduler" Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.949732 4813 generic.go:334] "Generic (PLEG): container finished" podID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerID="7a8ca8ad5c1cdcec9606c7ec949d73d3d668e55205fc6e6f620b6385f5907110" exitCode=143 Dec 02 10:38:13 crc kubenswrapper[4813]: I1202 10:38:13.949925 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a804fb5-4ce7-42e5-beb8-d301ece0f571","Type":"ContainerDied","Data":"7a8ca8ad5c1cdcec9606c7ec949d73d3d668e55205fc6e6f620b6385f5907110"} Dec 02 10:38:16 crc kubenswrapper[4813]: I1202 10:38:16.297609 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": read tcp 10.217.0.2:34300->10.217.0.180:8775: read: connection reset by peer" Dec 02 10:38:16 crc kubenswrapper[4813]: I1202 10:38:16.297648 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.180:8775/\": read tcp 10.217.0.2:34284->10.217.0.180:8775: read: connection reset by peer" Dec 02 10:38:16 crc kubenswrapper[4813]: I1202 10:38:16.979792 4813 generic.go:334] "Generic (PLEG): container finished" podID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerID="db0d4ed82c2c133f26e05b0ac71fa1b871ea8bc01e11c40a6d389d1b330865b9" exitCode=0 Dec 02 10:38:16 crc kubenswrapper[4813]: I1202 10:38:16.979903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a804fb5-4ce7-42e5-beb8-d301ece0f571","Type":"ContainerDied","Data":"db0d4ed82c2c133f26e05b0ac71fa1b871ea8bc01e11c40a6d389d1b330865b9"} Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.229443 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.316376 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fblvr\" (UniqueName: \"kubernetes.io/projected/7a804fb5-4ce7-42e5-beb8-d301ece0f571-kube-api-access-fblvr\") pod \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.316426 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a804fb5-4ce7-42e5-beb8-d301ece0f571-logs\") pod \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.316458 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-nova-metadata-tls-certs\") pod \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.316529 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-combined-ca-bundle\") pod \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.316575 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-config-data\") pod \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\" (UID: \"7a804fb5-4ce7-42e5-beb8-d301ece0f571\") " Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.317036 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a804fb5-4ce7-42e5-beb8-d301ece0f571-logs" (OuterVolumeSpecName: "logs") pod "7a804fb5-4ce7-42e5-beb8-d301ece0f571" (UID: "7a804fb5-4ce7-42e5-beb8-d301ece0f571"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.323866 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a804fb5-4ce7-42e5-beb8-d301ece0f571-kube-api-access-fblvr" (OuterVolumeSpecName: "kube-api-access-fblvr") pod "7a804fb5-4ce7-42e5-beb8-d301ece0f571" (UID: "7a804fb5-4ce7-42e5-beb8-d301ece0f571"). InnerVolumeSpecName "kube-api-access-fblvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.346879 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-config-data" (OuterVolumeSpecName: "config-data") pod "7a804fb5-4ce7-42e5-beb8-d301ece0f571" (UID: "7a804fb5-4ce7-42e5-beb8-d301ece0f571"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.355154 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a804fb5-4ce7-42e5-beb8-d301ece0f571" (UID: "7a804fb5-4ce7-42e5-beb8-d301ece0f571"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.372767 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7a804fb5-4ce7-42e5-beb8-d301ece0f571" (UID: "7a804fb5-4ce7-42e5-beb8-d301ece0f571"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.418306 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fblvr\" (UniqueName: \"kubernetes.io/projected/7a804fb5-4ce7-42e5-beb8-d301ece0f571-kube-api-access-fblvr\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.418737 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a804fb5-4ce7-42e5-beb8-d301ece0f571-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.418753 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.418765 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.418776 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a804fb5-4ce7-42e5-beb8-d301ece0f571-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.972193 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.992096 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a804fb5-4ce7-42e5-beb8-d301ece0f571","Type":"ContainerDied","Data":"2787a6f1cf3527c2da4117798799e6ffd684ba9b72cc0b2611d0cbc6f932c08b"} Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.992148 4813 scope.go:117] "RemoveContainer" containerID="db0d4ed82c2c133f26e05b0ac71fa1b871ea8bc01e11c40a6d389d1b330865b9" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.992280 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.999056 4813 generic.go:334] "Generic (PLEG): container finished" podID="fd1cea75-2a71-4ef2-b4e4-2072f825dc10" containerID="5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c" exitCode=0 Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.999157 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd1cea75-2a71-4ef2-b4e4-2072f825dc10","Type":"ContainerDied","Data":"5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c"} Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.999178 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd1cea75-2a71-4ef2-b4e4-2072f825dc10","Type":"ContainerDied","Data":"32e378d75c4c91a303f854fcb079c6c3ce24151c7a79818db7f70639b546a13a"} Dec 02 10:38:17 crc kubenswrapper[4813]: I1202 10:38:17.999225 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.027017 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.038113 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.045712 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.046044 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1cea75-2a71-4ef2-b4e4-2072f825dc10" containerName="nova-scheduler-scheduler" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046060 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1cea75-2a71-4ef2-b4e4-2072f825dc10" containerName="nova-scheduler-scheduler" Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.046085 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" containerName="init" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046091 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" containerName="init" Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.046106 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-metadata" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046112 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-metadata" Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.046129 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f99d60d-1399-4a43-9711-01a53a17257a" containerName="nova-manage" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046135 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f99d60d-1399-4a43-9711-01a53a17257a" containerName="nova-manage" Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.046147 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-log" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046153 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-log" Dec 02 10:38:18 crc 
kubenswrapper[4813]: E1202 10:38:18.046169 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" containerName="dnsmasq-dns" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046174 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" containerName="dnsmasq-dns" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046319 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f99d60d-1399-4a43-9711-01a53a17257a" containerName="nova-manage" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046329 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-log" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046342 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" containerName="nova-metadata-metadata" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046352 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd1cea75-2a71-4ef2-b4e4-2072f825dc10" containerName="nova-scheduler-scheduler" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046365 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd1b4ae-380f-4b1a-9ead-df3407a2814d" containerName="dnsmasq-dns" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.046913 4813 scope.go:117] "RemoveContainer" containerID="7a8ca8ad5c1cdcec9606c7ec949d73d3d668e55205fc6e6f620b6385f5907110" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.047246 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.050533 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.050730 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.057132 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.070120 4813 scope.go:117] "RemoveContainer" containerID="5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.081059 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a804fb5-4ce7-42e5-beb8-d301ece0f571" path="/var/lib/kubelet/pods/7a804fb5-4ce7-42e5-beb8-d301ece0f571/volumes" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.092304 4813 scope.go:117] "RemoveContainer" containerID="5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c" Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.093869 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c\": container with ID starting with 5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c not found: ID does not exist" containerID="5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.093908 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c"} 
err="failed to get container status \"5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c\": rpc error: code = NotFound desc = could not find container \"5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c\": container with ID starting with 5019d365951ce957219e56666ad6c0e82e57fdd2832901fa2ba1d0235a6eab1c not found: ID does not exist" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.135124 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-config-data\") pod \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.135414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwrf\" (UniqueName: \"kubernetes.io/projected/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-kube-api-access-kmwrf\") pod \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.136392 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle\") pod \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.136918 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-config-data\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.137216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.137411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hhx\" (UniqueName: \"kubernetes.io/projected/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-kube-api-access-k9hhx\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.137526 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-logs\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.137865 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.141971 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-kube-api-access-kmwrf" (OuterVolumeSpecName: "kube-api-access-kmwrf") pod "fd1cea75-2a71-4ef2-b4e4-2072f825dc10" (UID: "fd1cea75-2a71-4ef2-b4e4-2072f825dc10"). InnerVolumeSpecName "kube-api-access-kmwrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.160562 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle podName:fd1cea75-2a71-4ef2-b4e4-2072f825dc10 nodeName:}" failed. No retries permitted until 2025-12-02 10:38:18.660535685 +0000 UTC m=+1822.855709987 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle") pod "fd1cea75-2a71-4ef2-b4e4-2072f825dc10" (UID: "fd1cea75-2a71-4ef2-b4e4-2072f825dc10") : error deleting /var/lib/kubelet/pods/fd1cea75-2a71-4ef2-b4e4-2072f825dc10/volume-subpaths: remove /var/lib/kubelet/pods/fd1cea75-2a71-4ef2-b4e4-2072f825dc10/volume-subpaths: no such file or directory Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.163374 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-config-data" (OuterVolumeSpecName: "config-data") pod "fd1cea75-2a71-4ef2-b4e4-2072f825dc10" (UID: "fd1cea75-2a71-4ef2-b4e4-2072f825dc10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.239397 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9hhx\" (UniqueName: \"kubernetes.io/projected/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-kube-api-access-k9hhx\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.239464 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-logs\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.239560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.239607 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-config-data\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.239686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.239752 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.239766 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwrf\" (UniqueName: \"kubernetes.io/projected/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-kube-api-access-kmwrf\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.240642 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-logs\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.243190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.243959 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-config-data\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.244805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.259737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9hhx\" (UniqueName: \"kubernetes.io/projected/0a79c68c-68f6-4ea8-9752-3f5710f6f29b-kube-api-access-k9hhx\") pod \"nova-metadata-0\" (UID: \"0a79c68c-68f6-4ea8-9752-3f5710f6f29b\") " pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.366834 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.749608 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle\") pod \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\" (UID: \"fd1cea75-2a71-4ef2-b4e4-2072f825dc10\") " Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.755131 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd1cea75-2a71-4ef2-b4e4-2072f825dc10" (UID: "fd1cea75-2a71-4ef2-b4e4-2072f825dc10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.813607 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.853294 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cea75-2a71-4ef2-b4e4-2072f825dc10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.913790 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.938284 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.948985 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.958751 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.959224 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-api" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.959248 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-api" Dec 02 10:38:18 crc kubenswrapper[4813]: E1202 10:38:18.959288 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-log" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.959306 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-log" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.959522 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-log" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.959554 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be76aa9-a556-4e86-bfb6-447619914729" containerName="nova-api-api" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.960306 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.962194 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 10:38:18 crc kubenswrapper[4813]: I1202 10:38:18.977337 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.009212 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a79c68c-68f6-4ea8-9752-3f5710f6f29b","Type":"ContainerStarted","Data":"5194f6cc72cd1497f7445cf513641b67d918736bade5653cd4a6b83f5e1896b3"} Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.009539 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a79c68c-68f6-4ea8-9752-3f5710f6f29b","Type":"ContainerStarted","Data":"7e1414342fb17448da9e54c429f0fc41cba6c342e9b194e5f349e3d0a3e1b64d"} Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.011292 4813 generic.go:334] "Generic (PLEG): container finished" podID="7be76aa9-a556-4e86-bfb6-447619914729" containerID="82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f" exitCode=0 Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.011333 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.011374 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7be76aa9-a556-4e86-bfb6-447619914729","Type":"ContainerDied","Data":"82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f"} Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.011408 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7be76aa9-a556-4e86-bfb6-447619914729","Type":"ContainerDied","Data":"b559972a661d1c6269e70ab792497207c6caf4b9e9455f0eecb9e6e5baeb91b9"} Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.011425 4813 scope.go:117] "RemoveContainer" containerID="82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.029611 4813 scope.go:117] "RemoveContainer" containerID="12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.052405 4813 scope.go:117] "RemoveContainer" containerID="82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f" Dec 02 10:38:19 crc kubenswrapper[4813]: E1202 10:38:19.052847 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f\": container with ID starting with 82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f not found: ID does not exist" containerID="82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.052926 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f"} err="failed to get container status \"82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f\": rpc error: code = NotFound desc = could not find container \"82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f\": container with ID starting with 
82d597a78d7afa11ad9f4dd7747600506e0d5b09066cde5f6d554e56df11361f not found: ID does not exist" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.052993 4813 scope.go:117] "RemoveContainer" containerID="12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b" Dec 02 10:38:19 crc kubenswrapper[4813]: E1202 10:38:19.053918 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b\": container with ID starting with 12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b not found: ID does not exist" containerID="12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.053965 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b"} err="failed to get container status \"12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b\": rpc error: code = NotFound desc = could not find container \"12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b\": container with ID starting with 12d30c827d628c32ae5317c3b407a0e663479fa45cdd1fa086e5a997b50f683b not found: ID does not exist" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.056654 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-internal-tls-certs\") pod \"7be76aa9-a556-4e86-bfb6-447619914729\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.056769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bqn6\" (UniqueName: \"kubernetes.io/projected/7be76aa9-a556-4e86-bfb6-447619914729-kube-api-access-8bqn6\") pod \"7be76aa9-a556-4e86-bfb6-447619914729\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.056823 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-public-tls-certs\") pod \"7be76aa9-a556-4e86-bfb6-447619914729\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.056907 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-combined-ca-bundle\") pod \"7be76aa9-a556-4e86-bfb6-447619914729\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.057075 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be76aa9-a556-4e86-bfb6-447619914729-logs\") pod \"7be76aa9-a556-4e86-bfb6-447619914729\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.057122 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-config-data\") pod \"7be76aa9-a556-4e86-bfb6-447619914729\" (UID: \"7be76aa9-a556-4e86-bfb6-447619914729\") " Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.057491 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f3e7d5-190b-4014-9357-a26896d27248-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.057637 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f3e7d5-190b-4014-9357-a26896d27248-config-data\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.057776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjrfp\" (UniqueName: \"kubernetes.io/projected/26f3e7d5-190b-4014-9357-a26896d27248-kube-api-access-hjrfp\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.057794 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be76aa9-a556-4e86-bfb6-447619914729-logs" (OuterVolumeSpecName: "logs") pod "7be76aa9-a556-4e86-bfb6-447619914729" (UID: "7be76aa9-a556-4e86-bfb6-447619914729"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.061054 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be76aa9-a556-4e86-bfb6-447619914729-kube-api-access-8bqn6" (OuterVolumeSpecName: "kube-api-access-8bqn6") pod "7be76aa9-a556-4e86-bfb6-447619914729" (UID: "7be76aa9-a556-4e86-bfb6-447619914729"). InnerVolumeSpecName "kube-api-access-8bqn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.082945 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-config-data" (OuterVolumeSpecName: "config-data") pod "7be76aa9-a556-4e86-bfb6-447619914729" (UID: "7be76aa9-a556-4e86-bfb6-447619914729"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.091467 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7be76aa9-a556-4e86-bfb6-447619914729" (UID: "7be76aa9-a556-4e86-bfb6-447619914729"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.110992 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7be76aa9-a556-4e86-bfb6-447619914729" (UID: "7be76aa9-a556-4e86-bfb6-447619914729"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.119150 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7be76aa9-a556-4e86-bfb6-447619914729" (UID: "7be76aa9-a556-4e86-bfb6-447619914729"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.159625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f3e7d5-190b-4014-9357-a26896d27248-config-data\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.159691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjrfp\" (UniqueName: \"kubernetes.io/projected/26f3e7d5-190b-4014-9357-a26896d27248-kube-api-access-hjrfp\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.159804 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f3e7d5-190b-4014-9357-a26896d27248-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.159990 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be76aa9-a556-4e86-bfb6-447619914729-logs\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.160013 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.160029 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.160045 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bqn6\" (UniqueName: \"kubernetes.io/projected/7be76aa9-a556-4e86-bfb6-447619914729-kube-api-access-8bqn6\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.160057 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.160068 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be76aa9-a556-4e86-bfb6-447619914729-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.165049 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f3e7d5-190b-4014-9357-a26896d27248-config-data\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " 
pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.165197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f3e7d5-190b-4014-9357-a26896d27248-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.177967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjrfp\" (UniqueName: \"kubernetes.io/projected/26f3e7d5-190b-4014-9357-a26896d27248-kube-api-access-hjrfp\") pod \"nova-scheduler-0\" (UID: \"26f3e7d5-190b-4014-9357-a26896d27248\") " pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.288035 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.352639 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.368921 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.377754 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.379399 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.382066 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.382402 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.382573 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.387117 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.467151 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3982e534-d3ba-4867-8e51-2576155575e0-logs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.467592 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-config-data\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.467662 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.467726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt4rv\" (UniqueName: 
\"kubernetes.io/projected/3982e534-d3ba-4867-8e51-2576155575e0-kube-api-access-rt4rv\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.467756 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.467830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.569125 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3982e534-d3ba-4867-8e51-2576155575e0-logs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.569213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-config-data\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.569242 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.569279 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt4rv\" (UniqueName: \"kubernetes.io/projected/3982e534-d3ba-4867-8e51-2576155575e0-kube-api-access-rt4rv\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.569305 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.569346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.569808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3982e534-d3ba-4867-8e51-2576155575e0-logs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.577191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-config-data\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.578627 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.583648 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-public-tls-certs\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.588212 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3982e534-d3ba-4867-8e51-2576155575e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.594122 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt4rv\" (UniqueName: \"kubernetes.io/projected/3982e534-d3ba-4867-8e51-2576155575e0-kube-api-access-rt4rv\") pod \"nova-api-0\" (UID: \"3982e534-d3ba-4867-8e51-2576155575e0\") " pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.712973 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 10:38:19 crc kubenswrapper[4813]: I1202 10:38:19.783468 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 10:38:20 crc kubenswrapper[4813]: I1202 10:38:20.016546 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 10:38:20 crc kubenswrapper[4813]: W1202 10:38:20.027080 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3982e534_d3ba_4867_8e51_2576155575e0.slice/crio-b96848b1a9ab15b5d569bdf2aaa67787f379cc018cd3403a304bfb8139d3d28b WatchSource:0}: Error finding container b96848b1a9ab15b5d569bdf2aaa67787f379cc018cd3403a304bfb8139d3d28b: Status 404 returned error can't find the container with id b96848b1a9ab15b5d569bdf2aaa67787f379cc018cd3403a304bfb8139d3d28b Dec 02 10:38:20 crc kubenswrapper[4813]: I1202 10:38:20.033454 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a79c68c-68f6-4ea8-9752-3f5710f6f29b","Type":"ContainerStarted","Data":"a6dff2929019fe8ffe8ecb4ccb813fe72e89cd1636e306cbb9041218a2bfa579"} Dec 02 10:38:20 crc kubenswrapper[4813]: I1202 10:38:20.042521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26f3e7d5-190b-4014-9357-a26896d27248","Type":"ContainerStarted","Data":"b5b954a31dd5b7169c449651ce02a7b5fe4c8628b162b7e9d17b97363a61acf6"} Dec 02 10:38:20 crc kubenswrapper[4813]: I1202 10:38:20.042561 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26f3e7d5-190b-4014-9357-a26896d27248","Type":"ContainerStarted","Data":"7363d93ce92bb2bacc0b5cea4a788a9913500fb5be1fc8bcd89e52eab3e7b636"} Dec 02 10:38:20 crc kubenswrapper[4813]: I1202 10:38:20.062272 
4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.062253991 podStartE2EDuration="2.062253991s" podCreationTimestamp="2025-12-02 10:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:38:20.055530858 +0000 UTC m=+1824.250705160" watchObservedRunningTime="2025-12-02 10:38:20.062253991 +0000 UTC m=+1824.257428293" Dec 02 10:38:20 crc kubenswrapper[4813]: I1202 10:38:20.085909 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be76aa9-a556-4e86-bfb6-447619914729" path="/var/lib/kubelet/pods/7be76aa9-a556-4e86-bfb6-447619914729/volumes" Dec 02 10:38:20 crc kubenswrapper[4813]: I1202 10:38:20.086823 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd1cea75-2a71-4ef2-b4e4-2072f825dc10" path="/var/lib/kubelet/pods/fd1cea75-2a71-4ef2-b4e4-2072f825dc10/volumes" Dec 02 10:38:21 crc kubenswrapper[4813]: I1202 10:38:21.053996 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3982e534-d3ba-4867-8e51-2576155575e0","Type":"ContainerStarted","Data":"212fef544c893f17fd8ea794c98a7314abc01dad774fd6c31efe0098f08c1f4c"} Dec 02 10:38:21 crc kubenswrapper[4813]: I1202 10:38:21.054056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3982e534-d3ba-4867-8e51-2576155575e0","Type":"ContainerStarted","Data":"fdf59a143abb1707ced70fb68ff3f342c968971d5172a5eab62d9d06f1ef8cf7"} Dec 02 10:38:21 crc kubenswrapper[4813]: I1202 10:38:21.054094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3982e534-d3ba-4867-8e51-2576155575e0","Type":"ContainerStarted","Data":"b96848b1a9ab15b5d569bdf2aaa67787f379cc018cd3403a304bfb8139d3d28b"} Dec 02 10:38:21 crc kubenswrapper[4813]: I1202 10:38:21.089946 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.089928862 podStartE2EDuration="2.089928862s" podCreationTimestamp="2025-12-02 10:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:38:21.082318044 +0000 UTC m=+1825.277492346" watchObservedRunningTime="2025-12-02 10:38:21.089928862 +0000 UTC m=+1825.285103164" Dec 02 10:38:21 crc kubenswrapper[4813]: I1202 10:38:21.090750 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.090746376 podStartE2EDuration="3.090746376s" podCreationTimestamp="2025-12-02 10:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:38:20.079379082 +0000 UTC m=+1824.274553404" watchObservedRunningTime="2025-12-02 10:38:21.090746376 +0000 UTC m=+1825.285920678" Dec 02 10:38:23 crc kubenswrapper[4813]: I1202 10:38:23.367492 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:38:23 crc kubenswrapper[4813]: I1202 10:38:23.367537 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 10:38:24 crc kubenswrapper[4813]: I1202 10:38:24.288308 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 10:38:28 crc kubenswrapper[4813]: I1202 10:38:28.368121 
4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 10:38:28 crc kubenswrapper[4813]: I1202 10:38:28.368743 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 10:38:29 crc kubenswrapper[4813]: I1202 10:38:29.288771 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 10:38:29 crc kubenswrapper[4813]: I1202 10:38:29.316195 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 10:38:29 crc kubenswrapper[4813]: I1202 10:38:29.380259 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0a79c68c-68f6-4ea8-9752-3f5710f6f29b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:38:29 crc kubenswrapper[4813]: I1202 10:38:29.380403 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0a79c68c-68f6-4ea8-9752-3f5710f6f29b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:38:29 crc kubenswrapper[4813]: I1202 10:38:29.713587 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:38:29 crc kubenswrapper[4813]: I1202 10:38:29.714744 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 10:38:30 crc kubenswrapper[4813]: I1202 10:38:30.158580 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 10:38:30 crc kubenswrapper[4813]: I1202 10:38:30.195507 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 10:38:30 crc kubenswrapper[4813]: I1202 10:38:30.726239 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3982e534-d3ba-4867-8e51-2576155575e0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.192:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:38:30 crc kubenswrapper[4813]: I1202 10:38:30.726288 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3982e534-d3ba-4867-8e51-2576155575e0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.192:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 10:38:38 crc kubenswrapper[4813]: I1202 10:38:38.374173 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 10:38:38 crc kubenswrapper[4813]: I1202 10:38:38.376783 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 10:38:38 crc kubenswrapper[4813]: I1202 10:38:38.384322 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 10:38:39 crc kubenswrapper[4813]: I1202 10:38:39.223253 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 10:38:39 crc kubenswrapper[4813]: I1202 10:38:39.719754 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 10:38:39 crc kubenswrapper[4813]: I1202 10:38:39.720219 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 10:38:39 crc kubenswrapper[4813]: I1202 10:38:39.720304 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 10:38:39 crc kubenswrapper[4813]: I1202 10:38:39.728645 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 10:38:40 crc kubenswrapper[4813]: I1202 10:38:40.225648 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 10:38:40 crc kubenswrapper[4813]: I1202 10:38:40.231410 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 10:38:48 crc kubenswrapper[4813]: I1202 10:38:48.639564 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:38:49 crc kubenswrapper[4813]: I1202 10:38:49.836674 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:38:52 crc kubenswrapper[4813]: I1202 10:38:52.572094 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="250ea07a-903e-418f-adf4-0e720a9807f6" containerName="rabbitmq" containerID="cri-o://e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93" gracePeriod=604797 Dec 02 10:38:53 crc kubenswrapper[4813]: I1202 10:38:53.818256 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerName="rabbitmq" containerID="cri-o://139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5" gracePeriod=604797 Dec 02 10:38:54 crc kubenswrapper[4813]: I1202 10:38:54.680278 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="250ea07a-903e-418f-adf4-0e720a9807f6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Dec 02 10:38:55 crc kubenswrapper[4813]: I1202 10:38:55.022004 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.215231 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.394700 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-plugins\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395096 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/250ea07a-903e-418f-adf4-0e720a9807f6-erlang-cookie-secret\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395185 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-server-conf\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395246 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-erlang-cookie\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395273 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-confd\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395326 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs8t8\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-kube-api-access-rs8t8\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395404 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-tls\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395436 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-plugins-conf\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395465 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/250ea07a-903e-418f-adf4-0e720a9807f6-pod-info\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395494 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.395549 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-config-data\") pod \"250ea07a-903e-418f-adf4-0e720a9807f6\" (UID: \"250ea07a-903e-418f-adf4-0e720a9807f6\") " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.396001 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.399397 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.400563 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.403101 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.404291 4813 generic.go:334] "Generic (PLEG): container finished" podID="250ea07a-903e-418f-adf4-0e720a9807f6" containerID="e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93" exitCode=0 Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.404328 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"250ea07a-903e-418f-adf4-0e720a9807f6","Type":"ContainerDied","Data":"e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93"} Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.404354 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"250ea07a-903e-418f-adf4-0e720a9807f6","Type":"ContainerDied","Data":"8107a6ecf06831ebcc5300f2e50edc8ab9421f203e433d441e787a883664fec4"} Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.404370 4813 scope.go:117] "RemoveContainer" containerID="e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.404511 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.404552 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.407408 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/250ea07a-903e-418f-adf4-0e720a9807f6-pod-info" (OuterVolumeSpecName: "pod-info") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.412554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-kube-api-access-rs8t8" (OuterVolumeSpecName: "kube-api-access-rs8t8") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "kube-api-access-rs8t8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.414118 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250ea07a-903e-418f-adf4-0e720a9807f6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.433148 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-config-data" (OuterVolumeSpecName: "config-data") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.465554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-server-conf" (OuterVolumeSpecName: "server-conf") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499442 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499504 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs8t8\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-kube-api-access-rs8t8\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499519 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499557 4813 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499568 4813 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/250ea07a-903e-418f-adf4-0e720a9807f6-pod-info\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499604 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499642 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499655 4813 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/250ea07a-903e-418f-adf4-0e720a9807f6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.499668 4813 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/250ea07a-903e-418f-adf4-0e720a9807f6-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.507476 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "250ea07a-903e-418f-adf4-0e720a9807f6" (UID: "250ea07a-903e-418f-adf4-0e720a9807f6"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.518393 4813 scope.go:117] "RemoveContainer" containerID="bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.527421 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.544299 4813 scope.go:117] "RemoveContainer" containerID="e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93" Dec 02 10:38:59 crc kubenswrapper[4813]: E1202 10:38:59.545047 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93\": container with ID starting with e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93 not found: ID does not exist" containerID="e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.545127 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93"} err="failed to get container status \"e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93\": rpc error: code = NotFound desc = could not find container \"e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93\": container with ID starting with e41ec12aa301bdff9206f251a9a5d99616dddf4d25e3186b69c96caa4f261c93 not found: ID does not exist" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.545151 4813 scope.go:117] "RemoveContainer" containerID="bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79" Dec 02 10:38:59 crc kubenswrapper[4813]: E1202 10:38:59.545593 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79\": container with ID starting with bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79 not found: ID does not exist" containerID="bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.545653 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79"} err="failed to get container status \"bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79\": rpc error: code = NotFound desc = could not find container \"bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79\": container with ID starting with bcd90db40996ce7b3edb036ab093aab42e21bddd4d7436767e014060dd32dd79 not found: ID does not exist" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.601338 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/250ea07a-903e-418f-adf4-0e720a9807f6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.601382 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.752430 4813 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.757040 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.779876 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:38:59 crc kubenswrapper[4813]: E1202 10:38:59.780704 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250ea07a-903e-418f-adf4-0e720a9807f6" containerName="setup-container" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.780721 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="250ea07a-903e-418f-adf4-0e720a9807f6" containerName="setup-container" Dec 02 10:38:59 crc kubenswrapper[4813]: E1202 10:38:59.780749 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250ea07a-903e-418f-adf4-0e720a9807f6" containerName="rabbitmq" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.780755 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="250ea07a-903e-418f-adf4-0e720a9807f6" containerName="rabbitmq" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.780929 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="250ea07a-903e-418f-adf4-0e720a9807f6" containerName="rabbitmq" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.781932 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.785917 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qhxq6" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.786038 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.786131 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.786346 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.786470 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.786540 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.786619 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.792782 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.906565 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l668t\" (UniqueName: \"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-kube-api-access-l668t\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.907264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.907370 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.907531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.907681 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.907805 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.907910 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.908040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.908257 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.908392 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-config-data\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:38:59 crc kubenswrapper[4813]: I1202 10:38:59.908494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc 
kubenswrapper[4813]: I1202 10:39:00.010311 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.010549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.010676 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.010818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.010903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011005 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011105 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-config-data\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011640 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011045 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.011779 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l668t\" (UniqueName: \"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-kube-api-access-l668t\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.012370 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.012932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.013165 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-config-data\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.014507 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.015355 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.017234 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.020795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.028616 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l668t\" (UniqueName: \"kubernetes.io/projected/5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e-kube-api-access-l668t\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.043267 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e\") " pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.080061 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250ea07a-903e-418f-adf4-0e720a9807f6" path="/var/lib/kubelet/pods/250ea07a-903e-418f-adf4-0e720a9807f6/volumes" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.110747 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.365622 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.425809 4813 generic.go:334] "Generic (PLEG): container finished" podID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerID="139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5" exitCode=0 Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.425897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa","Type":"ContainerDied","Data":"139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5"} Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.425924 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa","Type":"ContainerDied","Data":"865846d0f1a546f24cb4dff38abe7970c5549ce7d36ec884be3c6b8e8141070b"} Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.425941 4813 scope.go:117] "RemoveContainer" containerID="139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.426087 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.448392 4813 scope.go:117] "RemoveContainer" containerID="6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.466295 4813 scope.go:117] "RemoveContainer" containerID="139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5" Dec 02 10:39:00 crc kubenswrapper[4813]: E1202 10:39:00.467327 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5\": container with ID starting with 139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5 not found: ID does not exist" containerID="139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.467359 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5"} err="failed to get container status \"139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5\": rpc error: code = NotFound desc = could not find container \"139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5\": container with ID starting with 139791d80d00a7738dc8525ef0ded72eb95ca05a24c695d57bf2ea8fd7285ff5 not found: ID does not exist" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.467380 4813 scope.go:117] "RemoveContainer" containerID="6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5" Dec 02 10:39:00 crc kubenswrapper[4813]: E1202 10:39:00.467800 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5\": container with ID starting with 6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5 not found: ID does not exist" containerID="6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.467817 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5"} err="failed to get container status \"6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5\": rpc error: code = NotFound desc = could not find container \"6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5\": container with ID starting with 6a3c7a8e92776f3ab3d6a07410758ed6650331293069210cbb6bca307f2030a5 not found: ID does not exist" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519163 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-plugins\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519221 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-erlang-cookie-secret\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519291 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-erlang-cookie\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519318 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-pod-info\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519364 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-confd\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519397 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26kst\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-kube-api-access-26kst\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-plugins-conf\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519503 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-tls\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519547 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-server-conf\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519571 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-config-data\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519598 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\" (UID: \"715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa\") " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.519938 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.520430 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.522556 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.522652 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.524816 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.525265 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.526776 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.527596 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-kube-api-access-26kst" (OuterVolumeSpecName: "kube-api-access-26kst") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "kube-api-access-26kst". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.528216 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-pod-info" (OuterVolumeSpecName: "pod-info") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.549837 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-config-data" (OuterVolumeSpecName: "config-data") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.571791 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-server-conf" (OuterVolumeSpecName: "server-conf") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.579954 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 10:39:00 crc kubenswrapper[4813]: W1202 10:39:00.583189 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f73dc68_cdbb_43ae_a9ab_0a07fc36ba8e.slice/crio-7754eb27e1263b82da143ccc33c275dc4a570b1f2c00a1d2a1a2a5082b8e9239 WatchSource:0}: Error finding container 7754eb27e1263b82da143ccc33c275dc4a570b1f2c00a1d2a1a2a5082b8e9239: Status 404 returned error can't find the container with id 7754eb27e1263b82da143ccc33c275dc4a570b1f2c00a1d2a1a2a5082b8e9239 Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621794 4813 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621831 4813 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-pod-info\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621843 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26kst\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-kube-api-access-26kst\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621854 4813 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621867 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621877 4813 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621886 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621920 4813 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.621934 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.625835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" (UID: "715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.641405 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.723450 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.723497 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.759869 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.768317 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.783987 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:39:00 crc kubenswrapper[4813]: E1202 10:39:00.784346 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerName="setup-container" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.784363 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerName="setup-container" Dec 02 10:39:00 crc kubenswrapper[4813]: E1202 10:39:00.784384 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerName="rabbitmq" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.784391 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerName="rabbitmq" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.784560 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" containerName="rabbitmq" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.785648 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.787403 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.789388 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.789501 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.789580 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.791560 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.791719 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.797227 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xbdbr" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.812971 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926301 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926364 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2541bb4e-08b2-43e0-8142-81f9af449133-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926385 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926424 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926519 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926569 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926593 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926646 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2541bb4e-08b2-43e0-8142-81f9af449133-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926787 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx95c\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-kube-api-access-jx95c\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:00 crc kubenswrapper[4813]: I1202 10:39:00.926859 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.028714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.028791 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.028823 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2541bb4e-08b2-43e0-8142-81f9af449133-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.028843 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.028876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.028912 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.029018 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.028990 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.029242 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.029292 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.029301 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.029380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2541bb4e-08b2-43e0-8142-81f9af449133-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.029392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.029421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx95c\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-kube-api-access-jx95c\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.029981 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.030037 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.030347 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2541bb4e-08b2-43e0-8142-81f9af449133-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.033558 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2541bb4e-08b2-43e0-8142-81f9af449133-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.033977 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2541bb4e-08b2-43e0-8142-81f9af449133-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.034026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.035285 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.048620 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx95c\" (UniqueName: \"kubernetes.io/projected/2541bb4e-08b2-43e0-8142-81f9af449133-kube-api-access-jx95c\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.058691 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2541bb4e-08b2-43e0-8142-81f9af449133\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.104384 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.439800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e","Type":"ContainerStarted","Data":"7754eb27e1263b82da143ccc33c275dc4a570b1f2c00a1d2a1a2a5082b8e9239"} Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.553369 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 10:39:01 crc kubenswrapper[4813]: W1202 10:39:01.556783 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2541bb4e_08b2_43e0_8142_81f9af449133.slice/crio-ea1566f8cfbdc5502661cfd3a1fab281da5ae8c8192cb8d776383473221bc052 WatchSource:0}: Error finding container ea1566f8cfbdc5502661cfd3a1fab281da5ae8c8192cb8d776383473221bc052: Status 404 returned error can't find the container with id ea1566f8cfbdc5502661cfd3a1fab281da5ae8c8192cb8d776383473221bc052 Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.756228 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-kjxkc"] Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.758395 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.760417 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.767148 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-kjxkc"] Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.841263 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj9sq\" (UniqueName: \"kubernetes.io/projected/32dc83c5-aca4-4483-92af-628f631dbff4-kube-api-access-tj9sq\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.841360 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.841620 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-config\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.841714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.841768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.841825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.943381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.943488 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj9sq\" (UniqueName: \"kubernetes.io/projected/32dc83c5-aca4-4483-92af-628f631dbff4-kube-api-access-tj9sq\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.943538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.943634 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-config\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.943668 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.943692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.944725 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.944747 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.944749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.944797 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-config\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.945269 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:01 crc kubenswrapper[4813]: I1202 10:39:01.961697 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj9sq\" (UniqueName: \"kubernetes.io/projected/32dc83c5-aca4-4483-92af-628f631dbff4-kube-api-access-tj9sq\") pod \"dnsmasq-dns-6447ccbd8f-kjxkc\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:02 crc kubenswrapper[4813]: I1202 10:39:02.075690 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:02 crc kubenswrapper[4813]: I1202 10:39:02.077965 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa" path="/var/lib/kubelet/pods/715c592f-b9e6-4226-9c5f-a3a6f2ccc6fa/volumes" Dec 02 10:39:02 crc kubenswrapper[4813]: I1202 10:39:02.458591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e","Type":"ContainerStarted","Data":"faccb9d8ee99b8c96519dcf82da10e20579c142997da5679d796d531fc9addd5"} Dec 02 10:39:02 crc kubenswrapper[4813]: I1202 10:39:02.459848 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2541bb4e-08b2-43e0-8142-81f9af449133","Type":"ContainerStarted","Data":"ea1566f8cfbdc5502661cfd3a1fab281da5ae8c8192cb8d776383473221bc052"} Dec 02 10:39:02 crc kubenswrapper[4813]: I1202 10:39:02.506294 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-kjxkc"] Dec 02 10:39:02 crc kubenswrapper[4813]: W1202 10:39:02.517012 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32dc83c5_aca4_4483_92af_628f631dbff4.slice/crio-5729a4c615e132c2068699456570b8a894796c8ad34911368665c0f5c7226358 WatchSource:0}: Error finding container 5729a4c615e132c2068699456570b8a894796c8ad34911368665c0f5c7226358: Status 404 returned error can't find the container with id 5729a4c615e132c2068699456570b8a894796c8ad34911368665c0f5c7226358 Dec 02 10:39:03 crc kubenswrapper[4813]: I1202 10:39:03.471849 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2541bb4e-08b2-43e0-8142-81f9af449133","Type":"ContainerStarted","Data":"403dee25b30badc43c1ceedf82d38d7aec28b2c9d89246c6224f4e5062f2aa87"} Dec 02 10:39:03 crc kubenswrapper[4813]: I1202 10:39:03.475334 4813 generic.go:334] "Generic (PLEG): container finished" podID="32dc83c5-aca4-4483-92af-628f631dbff4" containerID="9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148" exitCode=0 Dec 02 10:39:03 crc kubenswrapper[4813]: I1202 10:39:03.475746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" event={"ID":"32dc83c5-aca4-4483-92af-628f631dbff4","Type":"ContainerDied","Data":"9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148"} Dec 02 10:39:03 crc kubenswrapper[4813]: I1202 10:39:03.475826 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" event={"ID":"32dc83c5-aca4-4483-92af-628f631dbff4","Type":"ContainerStarted","Data":"5729a4c615e132c2068699456570b8a894796c8ad34911368665c0f5c7226358"} Dec 02 10:39:04 crc kubenswrapper[4813]: I1202 10:39:04.488955 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" event={"ID":"32dc83c5-aca4-4483-92af-628f631dbff4","Type":"ContainerStarted","Data":"95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f"} Dec 02 10:39:04 crc kubenswrapper[4813]: I1202 10:39:04.508905 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" podStartSLOduration=3.5088840919999997 podStartE2EDuration="3.508884092s" podCreationTimestamp="2025-12-02 10:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-02 10:39:04.505400272 +0000 UTC m=+1868.700574624" watchObservedRunningTime="2025-12-02 10:39:04.508884092 +0000 UTC m=+1868.704058394" Dec 02 10:39:05 crc kubenswrapper[4813]: I1202 10:39:05.501974 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.078064 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.130362 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zmknr"] Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.131206 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" podUID="621e5b06-b0c0-4563-a599-e03cc944cccc" containerName="dnsmasq-dns" containerID="cri-o://a3a5a4c80dff66b1926bcfce11c24da4ecf3bb46bb522653617495a8e48d7dce" gracePeriod=10 Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.295745 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-fs65h"] Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.298482 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.313567 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-fs65h"] Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.453252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.453325 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvczh\" (UniqueName: \"kubernetes.io/projected/888bdb9c-7436-47d1-b240-71ceb68bd6f1-kube-api-access-mvczh\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.453393 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-config\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.453509 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.453554 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: 
\"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.453903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.555421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.555461 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.555543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.555614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.555657 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvczh\" (UniqueName: \"kubernetes.io/projected/888bdb9c-7436-47d1-b240-71ceb68bd6f1-kube-api-access-mvczh\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.555695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-config\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.556530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.556661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " 
pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.556706 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-config\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.556766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.556830 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.565359 4813 generic.go:334] "Generic (PLEG): container finished" podID="621e5b06-b0c0-4563-a599-e03cc944cccc" containerID="a3a5a4c80dff66b1926bcfce11c24da4ecf3bb46bb522653617495a8e48d7dce" exitCode=0 Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.565399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" event={"ID":"621e5b06-b0c0-4563-a599-e03cc944cccc","Type":"ContainerDied","Data":"a3a5a4c80dff66b1926bcfce11c24da4ecf3bb46bb522653617495a8e48d7dce"} Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.565425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" event={"ID":"621e5b06-b0c0-4563-a599-e03cc944cccc","Type":"ContainerDied","Data":"f2251588473adf163f55377b81fa8788c4734863b4e79646ad93f1c7d66f107d"} Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.565435 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2251588473adf163f55377b81fa8788c4734863b4e79646ad93f1c7d66f107d" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.575430 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvczh\" (UniqueName: \"kubernetes.io/projected/888bdb9c-7436-47d1-b240-71ceb68bd6f1-kube-api-access-mvczh\") pod \"dnsmasq-dns-864d5fc68c-fs65h\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.621790 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.629615 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.759256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pzxr\" (UniqueName: \"kubernetes.io/projected/621e5b06-b0c0-4563-a599-e03cc944cccc-kube-api-access-2pzxr\") pod \"621e5b06-b0c0-4563-a599-e03cc944cccc\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.759330 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-sb\") pod \"621e5b06-b0c0-4563-a599-e03cc944cccc\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.759372 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-nb\") pod \"621e5b06-b0c0-4563-a599-e03cc944cccc\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.759457 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-dns-svc\") pod \"621e5b06-b0c0-4563-a599-e03cc944cccc\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.759505 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-config\") pod \"621e5b06-b0c0-4563-a599-e03cc944cccc\" (UID: \"621e5b06-b0c0-4563-a599-e03cc944cccc\") " Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.765596 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621e5b06-b0c0-4563-a599-e03cc944cccc-kube-api-access-2pzxr" (OuterVolumeSpecName: "kube-api-access-2pzxr") pod "621e5b06-b0c0-4563-a599-e03cc944cccc" (UID: "621e5b06-b0c0-4563-a599-e03cc944cccc"). InnerVolumeSpecName "kube-api-access-2pzxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.808551 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "621e5b06-b0c0-4563-a599-e03cc944cccc" (UID: "621e5b06-b0c0-4563-a599-e03cc944cccc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.817587 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "621e5b06-b0c0-4563-a599-e03cc944cccc" (UID: "621e5b06-b0c0-4563-a599-e03cc944cccc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.823413 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-config" (OuterVolumeSpecName: "config") pod "621e5b06-b0c0-4563-a599-e03cc944cccc" (UID: "621e5b06-b0c0-4563-a599-e03cc944cccc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.829270 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "621e5b06-b0c0-4563-a599-e03cc944cccc" (UID: "621e5b06-b0c0-4563-a599-e03cc944cccc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.862270 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.862302 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.862315 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pzxr\" (UniqueName: \"kubernetes.io/projected/621e5b06-b0c0-4563-a599-e03cc944cccc-kube-api-access-2pzxr\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.862330 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:12 crc kubenswrapper[4813]: I1202 10:39:12.862343 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/621e5b06-b0c0-4563-a599-e03cc944cccc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:13 crc kubenswrapper[4813]: I1202 10:39:13.098869 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-fs65h"] Dec 02 10:39:13 crc kubenswrapper[4813]: I1202 10:39:13.576320 4813 generic.go:334] "Generic (PLEG): container finished" podID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerID="1caaae268d9f24e4e4bc653fa4240222be3620b33e9e407537337c67fe92df01" exitCode=0 Dec 02 10:39:13 crc kubenswrapper[4813]: I1202 10:39:13.576553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" event={"ID":"888bdb9c-7436-47d1-b240-71ceb68bd6f1","Type":"ContainerDied","Data":"1caaae268d9f24e4e4bc653fa4240222be3620b33e9e407537337c67fe92df01"} Dec 02 10:39:13 crc kubenswrapper[4813]: I1202 10:39:13.576602 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" event={"ID":"888bdb9c-7436-47d1-b240-71ceb68bd6f1","Type":"ContainerStarted","Data":"20c8b5df90118f876bdbfcb32add85da17f51e3a6c43bd52df1a8b43d532c0f4"} Dec 02 10:39:13 crc kubenswrapper[4813]: I1202 10:39:13.576621 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-zmknr" Dec 02 10:39:13 crc kubenswrapper[4813]: I1202 10:39:13.817928 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zmknr"] Dec 02 10:39:13 crc kubenswrapper[4813]: I1202 10:39:13.825985 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-zmknr"] Dec 02 10:39:14 crc kubenswrapper[4813]: I1202 10:39:14.077665 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621e5b06-b0c0-4563-a599-e03cc944cccc" path="/var/lib/kubelet/pods/621e5b06-b0c0-4563-a599-e03cc944cccc/volumes" Dec 02 10:39:14 crc kubenswrapper[4813]: I1202 10:39:14.586693 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" event={"ID":"888bdb9c-7436-47d1-b240-71ceb68bd6f1","Type":"ContainerStarted","Data":"131a5e3b3a7aa79aac0552e395e734a073fa34e3f6df6b152c22f73691f91d8d"} Dec 02 10:39:14 crc kubenswrapper[4813]: I1202 10:39:14.586859 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:14 crc kubenswrapper[4813]: I1202 10:39:14.606303 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" podStartSLOduration=2.606285737 podStartE2EDuration="2.606285737s" podCreationTimestamp="2025-12-02 10:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:39:14.604894557 +0000 UTC m=+1878.800068859" watchObservedRunningTime="2025-12-02 10:39:14.606285737 +0000 UTC m=+1878.801460039" Dec 02 10:39:22 crc kubenswrapper[4813]: I1202 10:39:22.624193 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 10:39:22 crc kubenswrapper[4813]: I1202 10:39:22.690965 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-kjxkc"] Dec 02 10:39:22 crc kubenswrapper[4813]: I1202 10:39:22.695665 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" podUID="32dc83c5-aca4-4483-92af-628f631dbff4" containerName="dnsmasq-dns" containerID="cri-o://95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f" gracePeriod=10 Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.139358 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.246985 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj9sq\" (UniqueName: \"kubernetes.io/projected/32dc83c5-aca4-4483-92af-628f631dbff4-kube-api-access-tj9sq\") pod \"32dc83c5-aca4-4483-92af-628f631dbff4\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.247054 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-openstack-edpm-ipam\") pod \"32dc83c5-aca4-4483-92af-628f631dbff4\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.247188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-dns-svc\") pod \"32dc83c5-aca4-4483-92af-628f631dbff4\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.247222 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-config\") pod \"32dc83c5-aca4-4483-92af-628f631dbff4\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.247247 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-nb\") pod \"32dc83c5-aca4-4483-92af-628f631dbff4\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.247274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-sb\") pod \"32dc83c5-aca4-4483-92af-628f631dbff4\" (UID: \"32dc83c5-aca4-4483-92af-628f631dbff4\") " Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.253561 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dc83c5-aca4-4483-92af-628f631dbff4-kube-api-access-tj9sq" (OuterVolumeSpecName: "kube-api-access-tj9sq") pod "32dc83c5-aca4-4483-92af-628f631dbff4" (UID: "32dc83c5-aca4-4483-92af-628f631dbff4"). InnerVolumeSpecName "kube-api-access-tj9sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.299107 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-config" (OuterVolumeSpecName: "config") pod "32dc83c5-aca4-4483-92af-628f631dbff4" (UID: "32dc83c5-aca4-4483-92af-628f631dbff4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.301332 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32dc83c5-aca4-4483-92af-628f631dbff4" (UID: "32dc83c5-aca4-4483-92af-628f631dbff4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.302219 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "32dc83c5-aca4-4483-92af-628f631dbff4" (UID: "32dc83c5-aca4-4483-92af-628f631dbff4"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.304851 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32dc83c5-aca4-4483-92af-628f631dbff4" (UID: "32dc83c5-aca4-4483-92af-628f631dbff4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.306111 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32dc83c5-aca4-4483-92af-628f631dbff4" (UID: "32dc83c5-aca4-4483-92af-628f631dbff4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.349182 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj9sq\" (UniqueName: \"kubernetes.io/projected/32dc83c5-aca4-4483-92af-628f631dbff4-kube-api-access-tj9sq\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.349213 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.349224 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.349232 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-config\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.349241 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.349249 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32dc83c5-aca4-4483-92af-628f631dbff4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.665228 4813 generic.go:334] "Generic (PLEG): container finished" podID="32dc83c5-aca4-4483-92af-628f631dbff4" containerID="95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f" exitCode=0 Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.665316 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.665349 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" event={"ID":"32dc83c5-aca4-4483-92af-628f631dbff4","Type":"ContainerDied","Data":"95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f"} Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.665727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-kjxkc" event={"ID":"32dc83c5-aca4-4483-92af-628f631dbff4","Type":"ContainerDied","Data":"5729a4c615e132c2068699456570b8a894796c8ad34911368665c0f5c7226358"} Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.665750 4813 scope.go:117] "RemoveContainer" containerID="95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.687419 4813 scope.go:117] "RemoveContainer" containerID="9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.705290 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-kjxkc"] Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.715705 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-kjxkc"] Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.719609 4813 scope.go:117] "RemoveContainer" containerID="95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f" Dec 02 10:39:23 crc kubenswrapper[4813]: E1202 10:39:23.720095 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f\": container with ID starting with 95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f not found: ID does not exist" containerID="95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.720139 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f"} err="failed to get container status \"95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f\": rpc error: code = NotFound desc = could not find container \"95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f\": container with ID starting with 95f0338da49ca01ff08d56823b84d38a446a703a5c2a1e3150d1b25b19d2916f not found: ID does not exist" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.720169 4813 scope.go:117] "RemoveContainer" containerID="9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148" Dec 02 10:39:23 crc kubenswrapper[4813]: E1202 10:39:23.720557 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148\": container with ID starting with 9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148 not found: ID does not exist" containerID="9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148" Dec 02 10:39:23 crc kubenswrapper[4813]: I1202 10:39:23.720591 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148"} err="failed to get container status 
\"9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148\": rpc error: code = NotFound desc = could not find container \"9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148\": container with ID starting with 9b082d38adcfbaef21748890c425adf7ec798f845e232a39307104e537435148 not found: ID does not exist" Dec 02 10:39:24 crc kubenswrapper[4813]: I1202 10:39:24.080368 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32dc83c5-aca4-4483-92af-628f631dbff4" path="/var/lib/kubelet/pods/32dc83c5-aca4-4483-92af-628f631dbff4/volumes" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.446137 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm"] Dec 02 10:39:28 crc kubenswrapper[4813]: E1202 10:39:28.447352 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dc83c5-aca4-4483-92af-628f631dbff4" containerName="init" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.447369 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dc83c5-aca4-4483-92af-628f631dbff4" containerName="init" Dec 02 10:39:28 crc kubenswrapper[4813]: E1202 10:39:28.447419 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621e5b06-b0c0-4563-a599-e03cc944cccc" containerName="dnsmasq-dns" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.447427 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="621e5b06-b0c0-4563-a599-e03cc944cccc" containerName="dnsmasq-dns" Dec 02 10:39:28 crc kubenswrapper[4813]: E1202 10:39:28.447443 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621e5b06-b0c0-4563-a599-e03cc944cccc" containerName="init" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.447451 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="621e5b06-b0c0-4563-a599-e03cc944cccc" containerName="init" Dec 02 10:39:28 crc kubenswrapper[4813]: E1202 10:39:28.447465 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dc83c5-aca4-4483-92af-628f631dbff4" containerName="dnsmasq-dns" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.447473 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dc83c5-aca4-4483-92af-628f631dbff4" containerName="dnsmasq-dns" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.447703 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="621e5b06-b0c0-4563-a599-e03cc944cccc" containerName="dnsmasq-dns" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.447727 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dc83c5-aca4-4483-92af-628f631dbff4" containerName="dnsmasq-dns" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.448623 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.450680 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.450746 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.451709 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.453728 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.462243 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm"] Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.541371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.541651 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8xt\" (UniqueName: \"kubernetes.io/projected/192b9cbd-3f7c-4786-8028-60dd72662744-kube-api-access-dd8xt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.541874 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.541958 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.644127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.644197 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.644308 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.644426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8xt\" (UniqueName: \"kubernetes.io/projected/192b9cbd-3f7c-4786-8028-60dd72662744-kube-api-access-dd8xt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.652822 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.653826 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.656489 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.663180 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8xt\" (UniqueName: \"kubernetes.io/projected/192b9cbd-3f7c-4786-8028-60dd72662744-kube-api-access-dd8xt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:28 crc kubenswrapper[4813]: I1202 10:39:28.777124 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:29 crc kubenswrapper[4813]: I1202 10:39:29.246621 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm"] Dec 02 10:39:29 crc kubenswrapper[4813]: W1202 10:39:29.251418 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod192b9cbd_3f7c_4786_8028_60dd72662744.slice/crio-8bedd3ee11b66785c681e762513cec193213b70877e486a112a317746a93e905 WatchSource:0}: Error finding container 8bedd3ee11b66785c681e762513cec193213b70877e486a112a317746a93e905: Status 404 returned error can't find the container with id 8bedd3ee11b66785c681e762513cec193213b70877e486a112a317746a93e905 Dec 02 10:39:29 crc kubenswrapper[4813]: I1202 10:39:29.254326 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:39:29 crc kubenswrapper[4813]: I1202 10:39:29.733876 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" event={"ID":"192b9cbd-3f7c-4786-8028-60dd72662744","Type":"ContainerStarted","Data":"8bedd3ee11b66785c681e762513cec193213b70877e486a112a317746a93e905"} Dec 02 10:39:34 crc kubenswrapper[4813]: I1202 10:39:34.777173 4813 generic.go:334] "Generic (PLEG): container finished" podID="5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e" containerID="faccb9d8ee99b8c96519dcf82da10e20579c142997da5679d796d531fc9addd5" exitCode=0 Dec 02 10:39:34 crc kubenswrapper[4813]: I1202 10:39:34.777668 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e","Type":"ContainerDied","Data":"faccb9d8ee99b8c96519dcf82da10e20579c142997da5679d796d531fc9addd5"} Dec 02 10:39:36 crc kubenswrapper[4813]: I1202 10:39:36.796130 4813 generic.go:334] "Generic (PLEG): container finished" podID="2541bb4e-08b2-43e0-8142-81f9af449133" containerID="403dee25b30badc43c1ceedf82d38d7aec28b2c9d89246c6224f4e5062f2aa87" exitCode=0 Dec 02 10:39:36 crc kubenswrapper[4813]: I1202 10:39:36.796207 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2541bb4e-08b2-43e0-8142-81f9af449133","Type":"ContainerDied","Data":"403dee25b30badc43c1ceedf82d38d7aec28b2c9d89246c6224f4e5062f2aa87"} Dec 02 10:39:37 crc kubenswrapper[4813]: I1202 10:39:37.810168 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e","Type":"ContainerStarted","Data":"e94bc9b24ebef2c3935f05fa7f2095123f8cd0dad47de78a81090d761cdb1a83"} Dec 02 10:39:37 crc kubenswrapper[4813]: I1202 10:39:37.810847 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 02 10:39:37 crc kubenswrapper[4813]: I1202 10:39:37.812583 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" event={"ID":"192b9cbd-3f7c-4786-8028-60dd72662744","Type":"ContainerStarted","Data":"67c64dac86a433fc0845dd9c036e26a2e303f38fd962a13488a579e4182e4db8"} Dec 02 10:39:37 crc kubenswrapper[4813]: I1202 10:39:37.814370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"2541bb4e-08b2-43e0-8142-81f9af449133","Type":"ContainerStarted","Data":"95da90438016bd2e0a9bb3c73389d72fe38a4eb3a1fad936be93b4d357343e5c"} Dec 02 10:39:37 crc kubenswrapper[4813]: I1202 10:39:37.814868 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:37 crc kubenswrapper[4813]: I1202 10:39:37.841678 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.841652282 podStartE2EDuration="38.841652282s" podCreationTimestamp="2025-12-02 10:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:39:37.83703394 +0000 UTC m=+1902.032208272" watchObservedRunningTime="2025-12-02 10:39:37.841652282 +0000 UTC m=+1902.036826594" Dec 02 10:39:37 crc kubenswrapper[4813]: I1202 10:39:37.862515 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" podStartSLOduration=1.680577245 podStartE2EDuration="9.86248652s" podCreationTimestamp="2025-12-02 10:39:28 +0000 UTC" firstStartedPulling="2025-12-02 10:39:29.254092225 +0000 UTC m=+1893.449266527" lastFinishedPulling="2025-12-02 10:39:37.43600149 +0000 UTC m=+1901.631175802" observedRunningTime="2025-12-02 10:39:37.85238135 +0000 UTC m=+1902.047555642" watchObservedRunningTime="2025-12-02 10:39:37.86248652 +0000 UTC m=+1902.057660822" Dec 02 10:39:37 crc kubenswrapper[4813]: I1202 10:39:37.880726 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.880693412 podStartE2EDuration="37.880693412s" podCreationTimestamp="2025-12-02 10:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:39:37.872219749 +0000 UTC m=+1902.067394081" watchObservedRunningTime="2025-12-02 10:39:37.880693412 +0000 UTC m=+1902.075867724" Dec 02 10:39:49 crc kubenswrapper[4813]: I1202 10:39:49.913664 4813 generic.go:334] "Generic (PLEG): container finished" podID="192b9cbd-3f7c-4786-8028-60dd72662744" containerID="67c64dac86a433fc0845dd9c036e26a2e303f38fd962a13488a579e4182e4db8" exitCode=0 Dec 02 10:39:49 crc kubenswrapper[4813]: I1202 10:39:49.913746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" event={"ID":"192b9cbd-3f7c-4786-8028-60dd72662744","Type":"ContainerDied","Data":"67c64dac86a433fc0845dd9c036e26a2e303f38fd962a13488a579e4182e4db8"} Dec 02 10:39:50 crc kubenswrapper[4813]: I1202 10:39:50.114287 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.108272 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.584969 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.756156 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-repo-setup-combined-ca-bundle\") pod \"192b9cbd-3f7c-4786-8028-60dd72662744\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.756363 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-ssh-key\") pod \"192b9cbd-3f7c-4786-8028-60dd72662744\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.756395 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-inventory\") pod \"192b9cbd-3f7c-4786-8028-60dd72662744\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.756425 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8xt\" (UniqueName: \"kubernetes.io/projected/192b9cbd-3f7c-4786-8028-60dd72662744-kube-api-access-dd8xt\") pod \"192b9cbd-3f7c-4786-8028-60dd72662744\" (UID: \"192b9cbd-3f7c-4786-8028-60dd72662744\") " Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.761722 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "192b9cbd-3f7c-4786-8028-60dd72662744" (UID: "192b9cbd-3f7c-4786-8028-60dd72662744"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.763267 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192b9cbd-3f7c-4786-8028-60dd72662744-kube-api-access-dd8xt" (OuterVolumeSpecName: "kube-api-access-dd8xt") pod "192b9cbd-3f7c-4786-8028-60dd72662744" (UID: "192b9cbd-3f7c-4786-8028-60dd72662744"). InnerVolumeSpecName "kube-api-access-dd8xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.789775 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "192b9cbd-3f7c-4786-8028-60dd72662744" (UID: "192b9cbd-3f7c-4786-8028-60dd72662744"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.790437 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-inventory" (OuterVolumeSpecName: "inventory") pod "192b9cbd-3f7c-4786-8028-60dd72662744" (UID: "192b9cbd-3f7c-4786-8028-60dd72662744"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.859206 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.859253 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.859268 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8xt\" (UniqueName: \"kubernetes.io/projected/192b9cbd-3f7c-4786-8028-60dd72662744-kube-api-access-dd8xt\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.859283 4813 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192b9cbd-3f7c-4786-8028-60dd72662744-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.932470 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" event={"ID":"192b9cbd-3f7c-4786-8028-60dd72662744","Type":"ContainerDied","Data":"8bedd3ee11b66785c681e762513cec193213b70877e486a112a317746a93e905"} Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.932504 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm" Dec 02 10:39:51 crc kubenswrapper[4813]: I1202 10:39:51.932511 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bedd3ee11b66785c681e762513cec193213b70877e486a112a317746a93e905" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.021833 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22"] Dec 02 10:39:52 crc kubenswrapper[4813]: E1202 10:39:52.022287 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192b9cbd-3f7c-4786-8028-60dd72662744" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.022306 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="192b9cbd-3f7c-4786-8028-60dd72662744" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.022476 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="192b9cbd-3f7c-4786-8028-60dd72662744" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.023088 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.025739 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.026050 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.026235 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.026453 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.031746 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22"] Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.163546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.163658 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.163715 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zclz7\" (UniqueName: \"kubernetes.io/projected/baacde6b-2915-4fb8-a6db-793969a48c79-kube-api-access-zclz7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.163870 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.265917 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.266004 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.266062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zclz7\" (UniqueName: \"kubernetes.io/projected/baacde6b-2915-4fb8-a6db-793969a48c79-kube-api-access-zclz7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.266235 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.270299 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.270319 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.272192 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.285485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zclz7\" (UniqueName: \"kubernetes.io/projected/baacde6b-2915-4fb8-a6db-793969a48c79-kube-api-access-zclz7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.352697 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.859840 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22"] Dec 02 10:39:52 crc kubenswrapper[4813]: I1202 10:39:52.942021 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" event={"ID":"baacde6b-2915-4fb8-a6db-793969a48c79","Type":"ContainerStarted","Data":"48d0aab910e155f84fbb456be5d0113db6d5443ca22cefcfda71fc1ab7ebc803"} Dec 02 10:39:53 crc kubenswrapper[4813]: I1202 10:39:53.951720 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" event={"ID":"baacde6b-2915-4fb8-a6db-793969a48c79","Type":"ContainerStarted","Data":"013a10a8d810d9330165f55d3fce76dec51f3342f90d1104f1de7c96cfab7491"} Dec 02 10:39:53 crc kubenswrapper[4813]: I1202 10:39:53.975655 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" podStartSLOduration=2.405168822 podStartE2EDuration="2.975637872s" podCreationTimestamp="2025-12-02 10:39:51 +0000 UTC" firstStartedPulling="2025-12-02 10:39:52.871521508 +0000 UTC m=+1917.066695810" lastFinishedPulling="2025-12-02 10:39:53.441990558 +0000 UTC m=+1917.637164860" observedRunningTime="2025-12-02 10:39:53.97279656 +0000 UTC m=+1918.167970892" watchObservedRunningTime="2025-12-02 10:39:53.975637872 +0000 UTC m=+1918.170812174" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.017308 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vkvtl"] Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.028041 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.051737 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vkvtl"] Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.206383 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qbzg\" (UniqueName: \"kubernetes.io/projected/4a26683b-a0a8-4464-8972-1ba431ba510e-kube-api-access-9qbzg\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.206468 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-utilities\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.206663 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-catalog-content\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.273343 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.273410 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.309245 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qbzg\" (UniqueName: \"kubernetes.io/projected/4a26683b-a0a8-4464-8972-1ba431ba510e-kube-api-access-9qbzg\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.309351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-utilities\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.309498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-catalog-content\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.310257 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-catalog-content\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.310667 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-utilities\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.333918 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qbzg\" (UniqueName: \"kubernetes.io/projected/4a26683b-a0a8-4464-8972-1ba431ba510e-kube-api-access-9qbzg\") pod \"community-operators-vkvtl\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.349344 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:34 crc kubenswrapper[4813]: I1202 10:40:34.860019 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vkvtl"] Dec 02 10:40:35 crc kubenswrapper[4813]: I1202 10:40:35.299335 4813 generic.go:334] "Generic (PLEG): container finished" podID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerID="bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b" exitCode=0 Dec 02 10:40:35 crc kubenswrapper[4813]: I1202 10:40:35.299399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkvtl" event={"ID":"4a26683b-a0a8-4464-8972-1ba431ba510e","Type":"ContainerDied","Data":"bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b"} Dec 02 10:40:35 crc kubenswrapper[4813]: I1202 10:40:35.299669 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkvtl" event={"ID":"4a26683b-a0a8-4464-8972-1ba431ba510e","Type":"ContainerStarted","Data":"1639d4ecef5df4e0fc7e39fa42982a63d72dd8b65b6c183562652b5cbac7f7d4"} Dec 02 10:40:37 crc kubenswrapper[4813]: I1202 10:40:37.317697 4813 generic.go:334] "Generic (PLEG): container finished" podID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerID="381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0" exitCode=0 Dec 02 10:40:37 crc kubenswrapper[4813]: I1202 10:40:37.317795 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkvtl" event={"ID":"4a26683b-a0a8-4464-8972-1ba431ba510e","Type":"ContainerDied","Data":"381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0"} Dec 02 10:40:38 crc kubenswrapper[4813]: I1202 10:40:38.329354 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkvtl" event={"ID":"4a26683b-a0a8-4464-8972-1ba431ba510e","Type":"ContainerStarted","Data":"eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997"} Dec 02 10:40:38 crc kubenswrapper[4813]: I1202 10:40:38.349010 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vkvtl" podStartSLOduration=2.5995491 podStartE2EDuration="5.348990235s" podCreationTimestamp="2025-12-02 10:40:33 +0000 UTC" 
firstStartedPulling="2025-12-02 10:40:35.300945964 +0000 UTC m=+1959.496120276" lastFinishedPulling="2025-12-02 10:40:38.050387109 +0000 UTC m=+1962.245561411" observedRunningTime="2025-12-02 10:40:38.346836844 +0000 UTC m=+1962.542011156" watchObservedRunningTime="2025-12-02 10:40:38.348990235 +0000 UTC m=+1962.544164537" Dec 02 10:40:44 crc kubenswrapper[4813]: I1202 10:40:44.350156 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:44 crc kubenswrapper[4813]: I1202 10:40:44.351758 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:44 crc kubenswrapper[4813]: I1202 10:40:44.397348 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:45 crc kubenswrapper[4813]: I1202 10:40:45.434246 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:45 crc kubenswrapper[4813]: I1202 10:40:45.482363 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vkvtl"] Dec 02 10:40:47 crc kubenswrapper[4813]: I1202 10:40:47.400008 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vkvtl" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerName="registry-server" containerID="cri-o://eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997" gracePeriod=2 Dec 02 10:40:47 crc kubenswrapper[4813]: I1202 10:40:47.833160 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:47 crc kubenswrapper[4813]: I1202 10:40:47.944671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-utilities\") pod \"4a26683b-a0a8-4464-8972-1ba431ba510e\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " Dec 02 10:40:47 crc kubenswrapper[4813]: I1202 10:40:47.944777 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qbzg\" (UniqueName: \"kubernetes.io/projected/4a26683b-a0a8-4464-8972-1ba431ba510e-kube-api-access-9qbzg\") pod \"4a26683b-a0a8-4464-8972-1ba431ba510e\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " Dec 02 10:40:47 crc kubenswrapper[4813]: I1202 10:40:47.944865 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-catalog-content\") pod \"4a26683b-a0a8-4464-8972-1ba431ba510e\" (UID: \"4a26683b-a0a8-4464-8972-1ba431ba510e\") " Dec 02 10:40:47 crc kubenswrapper[4813]: I1202 10:40:47.945556 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-utilities" (OuterVolumeSpecName: "utilities") pod "4a26683b-a0a8-4464-8972-1ba431ba510e" (UID: "4a26683b-a0a8-4464-8972-1ba431ba510e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:40:47 crc kubenswrapper[4813]: I1202 10:40:47.945737 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:40:47 crc kubenswrapper[4813]: I1202 10:40:47.951691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a26683b-a0a8-4464-8972-1ba431ba510e-kube-api-access-9qbzg" (OuterVolumeSpecName: "kube-api-access-9qbzg") pod "4a26683b-a0a8-4464-8972-1ba431ba510e" (UID: "4a26683b-a0a8-4464-8972-1ba431ba510e"). InnerVolumeSpecName "kube-api-access-9qbzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.013985 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a26683b-a0a8-4464-8972-1ba431ba510e" (UID: "4a26683b-a0a8-4464-8972-1ba431ba510e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.047117 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a26683b-a0a8-4464-8972-1ba431ba510e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.047151 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qbzg\" (UniqueName: \"kubernetes.io/projected/4a26683b-a0a8-4464-8972-1ba431ba510e-kube-api-access-9qbzg\") on node \"crc\" DevicePath \"\"" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.412043 4813 generic.go:334] "Generic (PLEG): container finished" podID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerID="eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997" exitCode=0 Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.412147 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkvtl" event={"ID":"4a26683b-a0a8-4464-8972-1ba431ba510e","Type":"ContainerDied","Data":"eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997"} Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.412201 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkvtl" event={"ID":"4a26683b-a0a8-4464-8972-1ba431ba510e","Type":"ContainerDied","Data":"1639d4ecef5df4e0fc7e39fa42982a63d72dd8b65b6c183562652b5cbac7f7d4"} Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.412219 4813 scope.go:117] "RemoveContainer" containerID="eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.413461 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vkvtl" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.437972 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vkvtl"] Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.450088 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vkvtl"] Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.453311 4813 scope.go:117] "RemoveContainer" containerID="381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.476689 4813 scope.go:117] "RemoveContainer" containerID="bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.517005 4813 scope.go:117] "RemoveContainer" containerID="eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997" Dec 02 10:40:48 crc kubenswrapper[4813]: E1202 10:40:48.517394 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997\": container with ID starting with eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997 not found: ID does not exist" containerID="eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.517428 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997"} err="failed to get container status \"eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997\": rpc error: code = NotFound desc = could not find container \"eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997\": container with ID starting with eaa16d8de7b19180bf1bcdb3b59a96e78d526eadca57c45d55e3851ab12ad997 not found: ID does not exist" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.517454 4813 scope.go:117] "RemoveContainer" containerID="381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0" Dec 02 10:40:48 crc kubenswrapper[4813]: E1202 10:40:48.517675 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0\": container with ID starting with 381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0 not found: ID does not exist" containerID="381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.517700 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0"} err="failed to get container status \"381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0\": rpc error: code = NotFound desc = could not find container \"381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0\": container with ID starting with 381e256a1e5dbe82d9005cbf47749d76d4a6162621ec3ce4c37a2f72f5d929e0 not found: ID does not exist" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.517715 4813 scope.go:117] "RemoveContainer" containerID="bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b" Dec 02 10:40:48 crc kubenswrapper[4813]: E1202 10:40:48.517942 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b\": container with ID starting with bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b not found: ID does not exist" containerID="bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b" Dec 02 10:40:48 crc kubenswrapper[4813]: I1202 10:40:48.517968 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b"} err="failed to get container status \"bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b\": rpc error: code = NotFound desc = could not find container \"bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b\": container with ID starting with bb6d6cdf32860d237ce27505a3120493458e4e6dc2a784c687c111396b94572b not found: ID does not exist" Dec 02 10:40:50 crc kubenswrapper[4813]: I1202 10:40:50.078184 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" path="/var/lib/kubelet/pods/4a26683b-a0a8-4464-8972-1ba431ba510e/volumes" Dec 02 10:41:02 crc kubenswrapper[4813]: I1202 10:41:02.565467 4813 scope.go:117] "RemoveContainer" containerID="f81750011cdae84975844e776f53d009e1ea89b1d8a433410d1c3ae5e4f7287a" Dec 02 10:41:02 crc kubenswrapper[4813]: I1202 10:41:02.605961 4813 scope.go:117] "RemoveContainer" containerID="615acd113da5506851dfa6e6fb83b319169b19f0349396678ef47107f39ad17f" Dec 02 10:41:04 crc kubenswrapper[4813]: I1202 10:41:04.274156 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:41:04 crc kubenswrapper[4813]: I1202 10:41:04.274262 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.311715 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nvrrh"] Dec 02 10:41:06 crc kubenswrapper[4813]: E1202 10:41:06.312915 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerName="registry-server" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.312944 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerName="registry-server" Dec 02 10:41:06 crc kubenswrapper[4813]: E1202 10:41:06.312992 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerName="extract-content" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.313003 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerName="extract-content" Dec 02 10:41:06 crc kubenswrapper[4813]: E1202 10:41:06.313035 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerName="extract-utilities" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.313047 4813 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerName="extract-utilities" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.313425 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a26683b-a0a8-4464-8972-1ba431ba510e" containerName="registry-server" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.315662 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.325931 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvrrh"] Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.475544 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-catalog-content\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.475607 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-utilities\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.475688 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htq6p\" (UniqueName: \"kubernetes.io/projected/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-kube-api-access-htq6p\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.577447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-catalog-content\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.577511 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-utilities\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.577588 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htq6p\" (UniqueName: \"kubernetes.io/projected/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-kube-api-access-htq6p\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.577996 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-catalog-content\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.578175 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-utilities\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.608552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htq6p\" (UniqueName: \"kubernetes.io/projected/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-kube-api-access-htq6p\") pod \"redhat-operators-nvrrh\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:06 crc kubenswrapper[4813]: I1202 10:41:06.641903 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:07 crc kubenswrapper[4813]: I1202 10:41:07.147950 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvrrh"] Dec 02 10:41:07 crc kubenswrapper[4813]: I1202 10:41:07.576233 4813 generic.go:334] "Generic (PLEG): container finished" podID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerID="ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017" exitCode=0 Dec 02 10:41:07 crc kubenswrapper[4813]: I1202 10:41:07.576295 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvrrh" event={"ID":"df3a89f6-5638-4128-b7a0-3c0616b3cd9f","Type":"ContainerDied","Data":"ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017"} Dec 02 10:41:07 crc kubenswrapper[4813]: I1202 10:41:07.576341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvrrh" event={"ID":"df3a89f6-5638-4128-b7a0-3c0616b3cd9f","Type":"ContainerStarted","Data":"3b4a1331cf1f67b575754835b52f5b4133faf621ad20196da9afc7fc1851d5e5"} Dec 02 10:41:09 crc kubenswrapper[4813]: I1202 10:41:09.594786 4813 generic.go:334] "Generic (PLEG): container finished" podID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerID="fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93" exitCode=0 Dec 02 10:41:09 crc kubenswrapper[4813]: I1202 10:41:09.594836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvrrh" event={"ID":"df3a89f6-5638-4128-b7a0-3c0616b3cd9f","Type":"ContainerDied","Data":"fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93"} Dec 02 10:41:11 crc kubenswrapper[4813]: I1202 10:41:11.621576 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvrrh" event={"ID":"df3a89f6-5638-4128-b7a0-3c0616b3cd9f","Type":"ContainerStarted","Data":"68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128"} Dec 02 10:41:11 crc kubenswrapper[4813]: I1202 10:41:11.645140 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nvrrh" podStartSLOduration=2.736707296 podStartE2EDuration="5.645120691s" podCreationTimestamp="2025-12-02 10:41:06 +0000 UTC" firstStartedPulling="2025-12-02 10:41:07.577971142 +0000 UTC m=+1991.773145444" lastFinishedPulling="2025-12-02 10:41:10.486384517 +0000 UTC m=+1994.681558839" observedRunningTime="2025-12-02 10:41:11.63600661 +0000 UTC m=+1995.831180932" watchObservedRunningTime="2025-12-02 10:41:11.645120691 +0000 UTC m=+1995.840294993" Dec 02 10:41:16 crc kubenswrapper[4813]: I1202 10:41:16.642621 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:16 crc kubenswrapper[4813]: I1202 10:41:16.643153 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:16 crc kubenswrapper[4813]: I1202 10:41:16.698999 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:16 crc kubenswrapper[4813]: I1202 10:41:16.766364 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:16 crc kubenswrapper[4813]: I1202 10:41:16.936612 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvrrh"] Dec 02 10:41:18 crc kubenswrapper[4813]: I1202 10:41:18.699003 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nvrrh" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerName="registry-server" containerID="cri-o://68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128" gracePeriod=2 Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.116650 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.212686 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-utilities\") pod \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.212845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-catalog-content\") pod \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.212933 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htq6p\" (UniqueName: \"kubernetes.io/projected/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-kube-api-access-htq6p\") pod \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\" (UID: \"df3a89f6-5638-4128-b7a0-3c0616b3cd9f\") " Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.214209 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-utilities" (OuterVolumeSpecName: "utilities") pod "df3a89f6-5638-4128-b7a0-3c0616b3cd9f" (UID: "df3a89f6-5638-4128-b7a0-3c0616b3cd9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.220170 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-kube-api-access-htq6p" (OuterVolumeSpecName: "kube-api-access-htq6p") pod "df3a89f6-5638-4128-b7a0-3c0616b3cd9f" (UID: "df3a89f6-5638-4128-b7a0-3c0616b3cd9f"). InnerVolumeSpecName "kube-api-access-htq6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.317193 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.317774 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htq6p\" (UniqueName: \"kubernetes.io/projected/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-kube-api-access-htq6p\") on node \"crc\" DevicePath \"\"" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.332329 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df3a89f6-5638-4128-b7a0-3c0616b3cd9f" (UID: "df3a89f6-5638-4128-b7a0-3c0616b3cd9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.419722 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df3a89f6-5638-4128-b7a0-3c0616b3cd9f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.709246 4813 generic.go:334] "Generic (PLEG): container finished" podID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerID="68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128" exitCode=0 Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.709293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvrrh" event={"ID":"df3a89f6-5638-4128-b7a0-3c0616b3cd9f","Type":"ContainerDied","Data":"68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128"} Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.709330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvrrh" event={"ID":"df3a89f6-5638-4128-b7a0-3c0616b3cd9f","Type":"ContainerDied","Data":"3b4a1331cf1f67b575754835b52f5b4133faf621ad20196da9afc7fc1851d5e5"} Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.709350 4813 scope.go:117] "RemoveContainer" containerID="68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.709362 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvrrh" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.742388 4813 scope.go:117] "RemoveContainer" containerID="fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.752735 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvrrh"] Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.762500 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nvrrh"] Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.764345 4813 scope.go:117] "RemoveContainer" containerID="ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.808099 4813 scope.go:117] "RemoveContainer" containerID="68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128" Dec 02 10:41:19 crc kubenswrapper[4813]: E1202 10:41:19.808652 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128\": container with ID starting with 68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128 not found: ID does not exist" containerID="68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.808685 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128"} err="failed to get container status \"68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128\": rpc error: code = NotFound desc = could not find container \"68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128\": container with ID starting with 68ba9327606c268e75f293281368f48ec2ee80f640d21f684e3fea4480158128 not found: ID does not exist" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.808709 4813 scope.go:117] "RemoveContainer" containerID="fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93" Dec 02 10:41:19 crc kubenswrapper[4813]: E1202 10:41:19.809234 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93\": container with ID starting with fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93 not found: ID does not exist" containerID="fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.809258 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93"} err="failed to get container status \"fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93\": rpc error: code = NotFound desc = could not find container \"fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93\": container with ID starting with fedc5f1e1a4d7be9f0b689b6a9281df66a6b7767635ac923127d667c490f9f93 not found: ID does not exist" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.809274 4813 scope.go:117] "RemoveContainer" containerID="ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017" Dec 02 10:41:19 crc kubenswrapper[4813]: E1202 10:41:19.809811 4813 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017\": container with ID starting with ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017 not found: ID does not exist" containerID="ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017" Dec 02 10:41:19 crc kubenswrapper[4813]: I1202 10:41:19.809831 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017"} err="failed to get container status \"ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017\": rpc error: code = NotFound desc = could not find container \"ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017\": container with ID starting with ef83b948fe9d4c289b28a6420462bf1dfbd26dfef218ea21fd0fbc659f910017 not found: ID does not exist" Dec 02 10:41:20 crc kubenswrapper[4813]: I1202 10:41:20.083122 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" path="/var/lib/kubelet/pods/df3a89f6-5638-4128-b7a0-3c0616b3cd9f/volumes" Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.274285 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.275691 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.275789 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.277160 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fed071f8239a2c52d08d08e010c11558e2670d682506b388f82e7786b9072db"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.277243 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://2fed071f8239a2c52d08d08e010c11558e2670d682506b388f82e7786b9072db" gracePeriod=600 Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.850816 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="2fed071f8239a2c52d08d08e010c11558e2670d682506b388f82e7786b9072db" exitCode=0 Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.850897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"2fed071f8239a2c52d08d08e010c11558e2670d682506b388f82e7786b9072db"} 
Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.851174 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152"}
Dec 02 10:41:34 crc kubenswrapper[4813]: I1202 10:41:34.851196 4813 scope.go:117] "RemoveContainer" containerID="bbe9acc97187b008604a37265653c5fb82530aa48cd9299db2e76183edacb376"
Dec 02 10:42:02 crc kubenswrapper[4813]: I1202 10:42:02.719677 4813 scope.go:117] "RemoveContainer" containerID="2ad0dfd6d0e6d61dbd25c4a061ec329daed94f68a3c3469e9d38e5e9c35cb7a4"
Dec 02 10:42:02 crc kubenswrapper[4813]: I1202 10:42:02.760778 4813 scope.go:117] "RemoveContainer" containerID="439f50ace2179343b891cb0d02f690878896a456f5171fc9814b168b42a40940"
Dec 02 10:42:02 crc kubenswrapper[4813]: I1202 10:42:02.785094 4813 scope.go:117] "RemoveContainer" containerID="d9ac97e257eeaea5cceccffec6ebf65e1fc5ea925aeda6eb90d03330b785d75c"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.289282 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cm5km"]
Dec 02 10:42:12 crc kubenswrapper[4813]: E1202 10:42:12.290296 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerName="extract-utilities"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.290313 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerName="extract-utilities"
Dec 02 10:42:12 crc kubenswrapper[4813]: E1202 10:42:12.290329 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerName="extract-content"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.290338 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerName="extract-content"
Dec 02 10:42:12 crc kubenswrapper[4813]: E1202 10:42:12.290372 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerName="registry-server"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.290383 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerName="registry-server"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.290613 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3a89f6-5638-4128-b7a0-3c0616b3cd9f" containerName="registry-server"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.292322 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.318266 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cm5km"]
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.423095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk8rl\" (UniqueName: \"kubernetes.io/projected/efe55736-1a1c-4340-be64-e6c63ad77721-kube-api-access-hk8rl\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.423193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-utilities\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.423491 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-catalog-content\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.475307 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v6dmx"]
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.477364 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.493625 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6dmx"]
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.526026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk8rl\" (UniqueName: \"kubernetes.io/projected/efe55736-1a1c-4340-be64-e6c63ad77721-kube-api-access-hk8rl\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.526160 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-utilities\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.526309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-catalog-content\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.526815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-utilities\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.526833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-catalog-content\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.547185 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk8rl\" (UniqueName: \"kubernetes.io/projected/efe55736-1a1c-4340-be64-e6c63ad77721-kube-api-access-hk8rl\") pod \"certified-operators-cm5km\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") " pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.613438 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.628848 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zcp5\" (UniqueName: \"kubernetes.io/projected/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-kube-api-access-2zcp5\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.628912 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-catalog-content\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.628947 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-utilities\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.730534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-catalog-content\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.730594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-utilities\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.730714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zcp5\" (UniqueName: \"kubernetes.io/projected/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-kube-api-access-2zcp5\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.731146 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-catalog-content\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.731244 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-utilities\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.756283 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zcp5\" (UniqueName: \"kubernetes.io/projected/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-kube-api-access-2zcp5\") pod \"redhat-marketplace-v6dmx\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") " pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:12 crc kubenswrapper[4813]: I1202 10:42:12.795573 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:13 crc kubenswrapper[4813]: I1202 10:42:13.174238 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cm5km"]
Dec 02 10:42:13 crc kubenswrapper[4813]: W1202 10:42:13.180247 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe55736_1a1c_4340_be64_e6c63ad77721.slice/crio-4201244aa6ab648babde0a59b83b3a713bf5b91ba2751ad617bb9e639d8c4034 WatchSource:0}: Error finding container 4201244aa6ab648babde0a59b83b3a713bf5b91ba2751ad617bb9e639d8c4034: Status 404 returned error can't find the container with id 4201244aa6ab648babde0a59b83b3a713bf5b91ba2751ad617bb9e639d8c4034
Dec 02 10:42:13 crc kubenswrapper[4813]: I1202 10:42:13.205859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cm5km" event={"ID":"efe55736-1a1c-4340-be64-e6c63ad77721","Type":"ContainerStarted","Data":"4201244aa6ab648babde0a59b83b3a713bf5b91ba2751ad617bb9e639d8c4034"}
Dec 02 10:42:13 crc kubenswrapper[4813]: I1202 10:42:13.378624 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6dmx"]
Dec 02 10:42:14 crc kubenswrapper[4813]: I1202 10:42:14.217526 4813 generic.go:334] "Generic (PLEG): container finished" podID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerID="763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9" exitCode=0
Dec 02 10:42:14 crc kubenswrapper[4813]: I1202 10:42:14.217901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6dmx" event={"ID":"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09","Type":"ContainerDied","Data":"763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9"}
Dec 02 10:42:14 crc kubenswrapper[4813]: I1202 10:42:14.217954 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6dmx" event={"ID":"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09","Type":"ContainerStarted","Data":"92072b67853a47333640f2c22b828c5febdc7f78f6e4c39f5f984f855b88111d"}
Dec 02 10:42:14 crc kubenswrapper[4813]: I1202 10:42:14.222730 4813 generic.go:334] "Generic (PLEG): container finished" podID="efe55736-1a1c-4340-be64-e6c63ad77721" containerID="27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31" exitCode=0
Dec 02 10:42:14 crc kubenswrapper[4813]: I1202 10:42:14.222760 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cm5km" event={"ID":"efe55736-1a1c-4340-be64-e6c63ad77721","Type":"ContainerDied","Data":"27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31"}
Dec 02 10:42:16 crc kubenswrapper[4813]: I1202 10:42:16.248747 4813 generic.go:334] "Generic (PLEG): container finished" podID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerID="922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828" exitCode=0
Dec 02 10:42:16 crc kubenswrapper[4813]: I1202 10:42:16.248865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6dmx" event={"ID":"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09","Type":"ContainerDied","Data":"922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828"}
Dec 02 10:42:16 crc kubenswrapper[4813]: I1202 10:42:16.252175 4813 generic.go:334] "Generic (PLEG): container finished" podID="efe55736-1a1c-4340-be64-e6c63ad77721" containerID="948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715" exitCode=0
Dec 02 10:42:16 crc kubenswrapper[4813]: I1202 10:42:16.252222 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cm5km" event={"ID":"efe55736-1a1c-4340-be64-e6c63ad77721","Type":"ContainerDied","Data":"948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715"}
Dec 02 10:42:17 crc kubenswrapper[4813]: I1202 10:42:17.267466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6dmx" event={"ID":"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09","Type":"ContainerStarted","Data":"1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b"}
Dec 02 10:42:17 crc kubenswrapper[4813]: I1202 10:42:17.291650 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v6dmx" podStartSLOduration=2.573365112 podStartE2EDuration="5.291632344s" podCreationTimestamp="2025-12-02 10:42:12 +0000 UTC" firstStartedPulling="2025-12-02 10:42:14.220234275 +0000 UTC m=+2058.415408577" lastFinishedPulling="2025-12-02 10:42:16.938501517 +0000 UTC m=+2061.133675809" observedRunningTime="2025-12-02 10:42:17.288378151 +0000 UTC m=+2061.483552473" watchObservedRunningTime="2025-12-02 10:42:17.291632344 +0000 UTC m=+2061.486806646"
Dec 02 10:42:18 crc kubenswrapper[4813]: I1202 10:42:18.277955 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cm5km" event={"ID":"efe55736-1a1c-4340-be64-e6c63ad77721","Type":"ContainerStarted","Data":"d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161"}
Dec 02 10:42:18 crc kubenswrapper[4813]: I1202 10:42:18.302513 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cm5km" podStartSLOduration=3.266310306 podStartE2EDuration="6.302496887s" podCreationTimestamp="2025-12-02 10:42:12 +0000 UTC" firstStartedPulling="2025-12-02 10:42:14.225059843 +0000 UTC m=+2058.420234145" lastFinishedPulling="2025-12-02 10:42:17.261246424 +0000 UTC m=+2061.456420726" observedRunningTime="2025-12-02 10:42:18.296560557 +0000 UTC m=+2062.491734859" watchObservedRunningTime="2025-12-02 10:42:18.302496887 +0000 UTC m=+2062.497671189"
Dec 02 10:42:22 crc kubenswrapper[4813]: I1202 10:42:22.614291 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:22 crc kubenswrapper[4813]: I1202 10:42:22.614863 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:22 crc kubenswrapper[4813]: I1202 10:42:22.667737 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:22 crc kubenswrapper[4813]: I1202 10:42:22.796317 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:22 crc kubenswrapper[4813]: I1202 10:42:22.797379 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:22 crc kubenswrapper[4813]: I1202 10:42:22.840100 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:23 crc kubenswrapper[4813]: I1202 10:42:23.381432 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:23 crc kubenswrapper[4813]: I1202 10:42:23.382367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:25 crc kubenswrapper[4813]: I1202 10:42:25.464872 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6dmx"]
Dec 02 10:42:25 crc kubenswrapper[4813]: I1202 10:42:25.665251 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cm5km"]
Dec 02 10:42:25 crc kubenswrapper[4813]: I1202 10:42:25.665485 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cm5km" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" containerName="registry-server" containerID="cri-o://d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161" gracePeriod=2
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.100366 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.181332 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-utilities\") pod \"efe55736-1a1c-4340-be64-e6c63ad77721\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") "
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.181507 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk8rl\" (UniqueName: \"kubernetes.io/projected/efe55736-1a1c-4340-be64-e6c63ad77721-kube-api-access-hk8rl\") pod \"efe55736-1a1c-4340-be64-e6c63ad77721\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") "
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.181582 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-catalog-content\") pod \"efe55736-1a1c-4340-be64-e6c63ad77721\" (UID: \"efe55736-1a1c-4340-be64-e6c63ad77721\") "
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.182196 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-utilities" (OuterVolumeSpecName: "utilities") pod "efe55736-1a1c-4340-be64-e6c63ad77721" (UID: "efe55736-1a1c-4340-be64-e6c63ad77721"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.191017 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe55736-1a1c-4340-be64-e6c63ad77721-kube-api-access-hk8rl" (OuterVolumeSpecName: "kube-api-access-hk8rl") pod "efe55736-1a1c-4340-be64-e6c63ad77721" (UID: "efe55736-1a1c-4340-be64-e6c63ad77721"). InnerVolumeSpecName "kube-api-access-hk8rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.229713 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efe55736-1a1c-4340-be64-e6c63ad77721" (UID: "efe55736-1a1c-4340-be64-e6c63ad77721"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.284710 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.284741 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk8rl\" (UniqueName: \"kubernetes.io/projected/efe55736-1a1c-4340-be64-e6c63ad77721-kube-api-access-hk8rl\") on node \"crc\" DevicePath \"\""
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.284754 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe55736-1a1c-4340-be64-e6c63ad77721-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.355438 4813 generic.go:334] "Generic (PLEG): container finished" podID="efe55736-1a1c-4340-be64-e6c63ad77721" containerID="d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161" exitCode=0
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.355520 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cm5km"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.355529 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cm5km" event={"ID":"efe55736-1a1c-4340-be64-e6c63ad77721","Type":"ContainerDied","Data":"d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161"}
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.355569 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cm5km" event={"ID":"efe55736-1a1c-4340-be64-e6c63ad77721","Type":"ContainerDied","Data":"4201244aa6ab648babde0a59b83b3a713bf5b91ba2751ad617bb9e639d8c4034"}
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.355618 4813 scope.go:117] "RemoveContainer" containerID="d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.356038 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v6dmx" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerName="registry-server" containerID="cri-o://1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b" gracePeriod=2
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.405420 4813 scope.go:117] "RemoveContainer" containerID="948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.406296 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cm5km"]
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.413380 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cm5km"]
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.435040 4813 scope.go:117] "RemoveContainer" containerID="27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.472328 4813 scope.go:117] "RemoveContainer" containerID="d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161"
Dec 02 10:42:26 crc kubenswrapper[4813]: E1202 10:42:26.472723 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161\": container with ID starting with d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161 not found: ID does not exist" containerID="d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.472757 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161"} err="failed to get container status \"d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161\": rpc error: code = NotFound desc = could not find container \"d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161\": container with ID starting with d7471ef0b6e2df3ff27747a118ea81ff6e7ccbc5c16f3168d80b6ec68d4b5161 not found: ID does not exist"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.472780 4813 scope.go:117] "RemoveContainer" containerID="948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715"
Dec 02 10:42:26 crc kubenswrapper[4813]: E1202 10:42:26.473212 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715\": container with ID starting with 948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715 not found: ID does not exist" containerID="948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.473234 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715"} err="failed to get container status \"948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715\": rpc error: code = NotFound desc = could not find container \"948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715\": container with ID starting with 948b9b8c7bba24b7bf485c5cbeec80d0f8ef678534780fddb9177a591b474715 not found: ID does not exist"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.473248 4813 scope.go:117] "RemoveContainer" containerID="27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31"
Dec 02 10:42:26 crc kubenswrapper[4813]: E1202 10:42:26.473806 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31\": container with ID starting with 27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31 not found: ID does not exist" containerID="27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31"
Dec 02 10:42:26 crc kubenswrapper[4813]: I1202 10:42:26.473835 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31"} err="failed to get container status \"27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31\": rpc error: code = NotFound desc = could not find container \"27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31\": container with ID starting with 27b9f6f625b2e95e47b099377b8ed9a6c12ba27600bd71ebd5335caca8948a31 not found: ID does not exist"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.327922 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.378986 4813 generic.go:334] "Generic (PLEG): container finished" podID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerID="1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b" exitCode=0
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.379037 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6dmx" event={"ID":"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09","Type":"ContainerDied","Data":"1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b"}
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.379086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6dmx" event={"ID":"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09","Type":"ContainerDied","Data":"92072b67853a47333640f2c22b828c5febdc7f78f6e4c39f5f984f855b88111d"}
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.379109 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v6dmx"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.379120 4813 scope.go:117] "RemoveContainer" containerID="1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.402354 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-utilities\") pod \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") "
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.402366 4813 scope.go:117] "RemoveContainer" containerID="922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.402567 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zcp5\" (UniqueName: \"kubernetes.io/projected/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-kube-api-access-2zcp5\") pod \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") "
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.402644 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-catalog-content\") pod \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\" (UID: \"ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09\") "
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.403287 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-utilities" (OuterVolumeSpecName: "utilities") pod "ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" (UID: "ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.408415 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-kube-api-access-2zcp5" (OuterVolumeSpecName: "kube-api-access-2zcp5") pod "ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" (UID: "ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09"). InnerVolumeSpecName "kube-api-access-2zcp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.420874 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" (UID: "ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.423413 4813 scope.go:117] "RemoveContainer" containerID="763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.504790 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.504836 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zcp5\" (UniqueName: \"kubernetes.io/projected/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-kube-api-access-2zcp5\") on node \"crc\" DevicePath \"\""
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.504851 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.511054 4813 scope.go:117] "RemoveContainer" containerID="1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b"
Dec 02 10:42:27 crc kubenswrapper[4813]: E1202 10:42:27.511538 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b\": container with ID starting with 1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b not found: ID does not exist" containerID="1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.511582 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b"} err="failed to get container status \"1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b\": rpc error: code = NotFound desc = could not find container \"1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b\": container with ID starting with 1f896a176cb6bf465c6fc4f24b2e83927618a199c7a7cce2386051a15113189b not found: ID does not exist"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.511609 4813 scope.go:117] "RemoveContainer" containerID="922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828"
Dec 02 10:42:27 crc kubenswrapper[4813]: E1202 10:42:27.511953 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828\": container with ID starting with 922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828 not found: ID does not exist" containerID="922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.511987 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828"} err="failed to get container status \"922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828\": rpc error: code = NotFound desc = could not find container \"922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828\": container with ID starting with 922522481abaab8d13da975d760798bbc9cac1f30c85452619f8f4d1824fc828 not found: ID does not exist"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.512014 4813 scope.go:117] "RemoveContainer" containerID="763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9"
Dec 02 10:42:27 crc kubenswrapper[4813]: E1202 10:42:27.512397 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9\": container with ID starting with 763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9 not found: ID does not exist" containerID="763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.512439 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9"} err="failed to get container status \"763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9\": rpc error: code = NotFound desc = could not find container \"763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9\": container with ID starting with 763241e004471303d25cc3b881dc24941ecbf7c2999794b65af9a445e0982db9 not found: ID does not exist"
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.718781 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6dmx"]
Dec 02 10:42:27 crc kubenswrapper[4813]: I1202 10:42:27.728674 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6dmx"]
Dec 02 10:42:28 crc kubenswrapper[4813]: I1202 10:42:28.079507 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" path="/var/lib/kubelet/pods/ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09/volumes"
Dec 02 10:42:28 crc kubenswrapper[4813]: I1202 10:42:28.080578 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" path="/var/lib/kubelet/pods/efe55736-1a1c-4340-be64-e6c63ad77721/volumes"
Dec 02 10:43:26 crc kubenswrapper[4813]: I1202 10:43:26.948598 4813 generic.go:334] "Generic (PLEG): container finished" podID="baacde6b-2915-4fb8-a6db-793969a48c79" containerID="013a10a8d810d9330165f55d3fce76dec51f3342f90d1104f1de7c96cfab7491" exitCode=0
Dec 02 10:43:26 crc kubenswrapper[4813]: I1202 10:43:26.948649 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" event={"ID":"baacde6b-2915-4fb8-a6db-793969a48c79","Type":"ContainerDied","Data":"013a10a8d810d9330165f55d3fce76dec51f3342f90d1104f1de7c96cfab7491"}
Dec
02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.409868 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.450552 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zclz7\" (UniqueName: \"kubernetes.io/projected/baacde6b-2915-4fb8-a6db-793969a48c79-kube-api-access-zclz7\") pod \"baacde6b-2915-4fb8-a6db-793969a48c79\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.451789 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-bootstrap-combined-ca-bundle\") pod \"baacde6b-2915-4fb8-a6db-793969a48c79\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.451852 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-inventory\") pod \"baacde6b-2915-4fb8-a6db-793969a48c79\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.451971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-ssh-key\") pod \"baacde6b-2915-4fb8-a6db-793969a48c79\" (UID: \"baacde6b-2915-4fb8-a6db-793969a48c79\") " Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.456934 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "baacde6b-2915-4fb8-a6db-793969a48c79" (UID: "baacde6b-2915-4fb8-a6db-793969a48c79"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.458398 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baacde6b-2915-4fb8-a6db-793969a48c79-kube-api-access-zclz7" (OuterVolumeSpecName: "kube-api-access-zclz7") pod "baacde6b-2915-4fb8-a6db-793969a48c79" (UID: "baacde6b-2915-4fb8-a6db-793969a48c79"). InnerVolumeSpecName "kube-api-access-zclz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.485493 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "baacde6b-2915-4fb8-a6db-793969a48c79" (UID: "baacde6b-2915-4fb8-a6db-793969a48c79"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.496256 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-inventory" (OuterVolumeSpecName: "inventory") pod "baacde6b-2915-4fb8-a6db-793969a48c79" (UID: "baacde6b-2915-4fb8-a6db-793969a48c79"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.554044 4813 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.554105 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.554114 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/baacde6b-2915-4fb8-a6db-793969a48c79-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.554123 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zclz7\" (UniqueName: \"kubernetes.io/projected/baacde6b-2915-4fb8-a6db-793969a48c79-kube-api-access-zclz7\") on node \"crc\" DevicePath \"\"" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.970177 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" event={"ID":"baacde6b-2915-4fb8-a6db-793969a48c79","Type":"ContainerDied","Data":"48d0aab910e155f84fbb456be5d0113db6d5443ca22cefcfda71fc1ab7ebc803"} Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.970224 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48d0aab910e155f84fbb456be5d0113db6d5443ca22cefcfda71fc1ab7ebc803" Dec 02 10:43:28 crc kubenswrapper[4813]: I1202 10:43:28.970226 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.058381 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh"] Dec 02 10:43:29 crc kubenswrapper[4813]: E1202 10:43:29.058776 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" containerName="extract-content" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.058794 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" containerName="extract-content" Dec 02 10:43:29 crc kubenswrapper[4813]: E1202 10:43:29.058803 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" containerName="extract-utilities" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.058812 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" containerName="extract-utilities" Dec 02 10:43:29 crc kubenswrapper[4813]: E1202 10:43:29.058825 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baacde6b-2915-4fb8-a6db-793969a48c79" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.058833 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="baacde6b-2915-4fb8-a6db-793969a48c79" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:29 crc kubenswrapper[4813]: E1202 10:43:29.058844 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerName="extract-content" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.058850 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerName="extract-content" Dec 02 10:43:29 crc kubenswrapper[4813]: E1202 10:43:29.058862 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" containerName="registry-server" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.058867 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" containerName="registry-server" Dec 02 10:43:29 crc kubenswrapper[4813]: E1202 10:43:29.058877 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerName="registry-server" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.058882 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerName="registry-server" Dec 02 10:43:29 crc kubenswrapper[4813]: E1202 10:43:29.058902 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerName="extract-utilities" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.058908 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" containerName="extract-utilities" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.059128 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="baacde6b-2915-4fb8-a6db-793969a48c79" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.059151 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5cabe7-ee0e-4d14-ade4-d7ea9d7dde09" 
containerName="registry-server" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.059165 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe55736-1a1c-4340-be64-e6c63ad77721" containerName="registry-server" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.059796 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.063340 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.063531 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.063703 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.063878 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.070706 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh"] Dec 02 10:43:29 crc kubenswrapper[4813]: E1202 10:43:29.084928 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaacde6b_2915_4fb8_a6db_793969a48c79.slice\": RecentStats: unable to find data in memory cache]" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.162186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.162463 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.162488 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v97q\" (UniqueName: \"kubernetes.io/projected/f07ad40f-936a-40fd-b69e-308239229a25-kube-api-access-2v97q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.264276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc 
kubenswrapper[4813]: I1202 10:43:29.264407 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.264452 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v97q\" (UniqueName: \"kubernetes.io/projected/f07ad40f-936a-40fd-b69e-308239229a25-kube-api-access-2v97q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.269760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.280358 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.282407 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v97q\" (UniqueName: \"kubernetes.io/projected/f07ad40f-936a-40fd-b69e-308239229a25-kube-api-access-2v97q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.377462 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.878027 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh"] Dec 02 10:43:29 crc kubenswrapper[4813]: I1202 10:43:29.979356 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" event={"ID":"f07ad40f-936a-40fd-b69e-308239229a25","Type":"ContainerStarted","Data":"70bb6c7de2a5a4f52ddac7b6d437506c7a3fe6da375b762096849411282f164c"} Dec 02 10:43:32 crc kubenswrapper[4813]: I1202 10:43:32.000787 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" event={"ID":"f07ad40f-936a-40fd-b69e-308239229a25","Type":"ContainerStarted","Data":"e91c37cf4976becdaccf8755c31d331f932fc0906df7b30d78746ebffcf78d66"} Dec 02 10:43:32 crc kubenswrapper[4813]: I1202 10:43:32.019786 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" podStartSLOduration=1.636354749 podStartE2EDuration="3.019767625s" podCreationTimestamp="2025-12-02 10:43:29 +0000 UTC" firstStartedPulling="2025-12-02 10:43:29.887783483 +0000 UTC m=+2134.082957785" lastFinishedPulling="2025-12-02 10:43:31.271196369 +0000 UTC m=+2135.466370661" observedRunningTime="2025-12-02 10:43:32.016136091 +0000 UTC m=+2136.211310403" watchObservedRunningTime="2025-12-02 10:43:32.019767625 +0000 UTC m=+2136.214941927" Dec 02 10:43:34 crc kubenswrapper[4813]: I1202 10:43:34.273688 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:43:34 crc kubenswrapper[4813]: I1202 10:43:34.274032 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:44:02 crc kubenswrapper[4813]: I1202 10:44:02.943965 4813 scope.go:117] "RemoveContainer" containerID="27d0462ab69a51663cc9a03a107b69b87bc0cc30dbe28a8fe161df16efdd9bb4" Dec 02 10:44:02 crc kubenswrapper[4813]: I1202 10:44:02.970530 4813 scope.go:117] "RemoveContainer" containerID="a3a5a4c80dff66b1926bcfce11c24da4ecf3bb46bb522653617495a8e48d7dce" Dec 02 10:44:02 crc kubenswrapper[4813]: I1202 10:44:02.994784 4813 scope.go:117] "RemoveContainer" containerID="1bf19394cb6bdd889ac132b48a3fe1f34449761f1e21d23933e81fb1de54cdd4" Dec 02 10:44:03 crc kubenswrapper[4813]: I1202 10:44:03.021836 4813 scope.go:117] "RemoveContainer" containerID="97fa8f73017c583f9c9c2f7aebda76da5696c5b83dc30e505025a1bf267ae4b0" Dec 02 10:44:04 crc kubenswrapper[4813]: I1202 10:44:04.274164 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:44:04 crc kubenswrapper[4813]: I1202 10:44:04.274700 4813 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.040892 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9de0-account-create-update-w7fll"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.050223 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-49bf-account-create-update-vd578"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.058708 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-q8lrk"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.066449 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-k64ck"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.075200 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-l5fzt"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.082049 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9de0-account-create-update-w7fll"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.089209 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-445c-account-create-update-8rjvh"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.097134 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-49bf-account-create-update-vd578"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.104775 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-l5fzt"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.112060 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-q8lrk"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.123328 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-k64ck"] Dec 02 10:44:33 crc kubenswrapper[4813]: I1202 10:44:33.135620 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-445c-account-create-update-8rjvh"] Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.098136 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5d7f6a-a5a9-4bc8-a12d-0be10887c252" path="/var/lib/kubelet/pods/2e5d7f6a-a5a9-4bc8-a12d-0be10887c252/volumes" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.100156 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392875ff-4cd5-400d-b6e3-4c07d4b332ec" path="/var/lib/kubelet/pods/392875ff-4cd5-400d-b6e3-4c07d4b332ec/volumes" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.101341 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41332d45-348e-4649-9343-110363ba5ee0" path="/var/lib/kubelet/pods/41332d45-348e-4649-9343-110363ba5ee0/volumes" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.102462 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f1a6c3c-c5fd-402d-9fda-b497be370d4c" path="/var/lib/kubelet/pods/9f1a6c3c-c5fd-402d-9fda-b497be370d4c/volumes" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.104337 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac720eab-315f-4740-adb3-329feec1e5ef" 
path="/var/lib/kubelet/pods/ac720eab-315f-4740-adb3-329feec1e5ef/volumes" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.105580 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a79b83-9538-4382-acb8-b44688f3e2ce" path="/var/lib/kubelet/pods/b8a79b83-9538-4382-acb8-b44688f3e2ce/volumes" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.274222 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.274301 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.274416 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.275304 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.275384 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" gracePeriod=600 Dec 02 10:44:34 crc kubenswrapper[4813]: E1202 10:44:34.398844 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.744475 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" exitCode=0 Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.744540 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152"} Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.744629 4813 scope.go:117] "RemoveContainer" containerID="2fed071f8239a2c52d08d08e010c11558e2670d682506b388f82e7786b9072db" Dec 02 10:44:34 crc kubenswrapper[4813]: I1202 10:44:34.745515 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 
10:44:34 crc kubenswrapper[4813]: E1202 10:44:34.745958 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:44:45 crc kubenswrapper[4813]: I1202 10:44:45.870837 4813 generic.go:334] "Generic (PLEG): container finished" podID="f07ad40f-936a-40fd-b69e-308239229a25" containerID="e91c37cf4976becdaccf8755c31d331f932fc0906df7b30d78746ebffcf78d66" exitCode=0 Dec 02 10:44:45 crc kubenswrapper[4813]: I1202 10:44:45.871002 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" event={"ID":"f07ad40f-936a-40fd-b69e-308239229a25","Type":"ContainerDied","Data":"e91c37cf4976becdaccf8755c31d331f932fc0906df7b30d78746ebffcf78d66"} Dec 02 10:44:46 crc kubenswrapper[4813]: I1202 10:44:46.078687 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:44:46 crc kubenswrapper[4813]: E1202 10:44:46.079020 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.250145 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.399890 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v97q\" (UniqueName: \"kubernetes.io/projected/f07ad40f-936a-40fd-b69e-308239229a25-kube-api-access-2v97q\") pod \"f07ad40f-936a-40fd-b69e-308239229a25\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.400018 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-ssh-key\") pod \"f07ad40f-936a-40fd-b69e-308239229a25\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.400188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-inventory\") pod \"f07ad40f-936a-40fd-b69e-308239229a25\" (UID: \"f07ad40f-936a-40fd-b69e-308239229a25\") " Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.405575 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07ad40f-936a-40fd-b69e-308239229a25-kube-api-access-2v97q" (OuterVolumeSpecName: "kube-api-access-2v97q") pod "f07ad40f-936a-40fd-b69e-308239229a25" (UID: "f07ad40f-936a-40fd-b69e-308239229a25"). InnerVolumeSpecName "kube-api-access-2v97q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.425128 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f07ad40f-936a-40fd-b69e-308239229a25" (UID: "f07ad40f-936a-40fd-b69e-308239229a25"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.432269 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-inventory" (OuterVolumeSpecName: "inventory") pod "f07ad40f-936a-40fd-b69e-308239229a25" (UID: "f07ad40f-936a-40fd-b69e-308239229a25"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.502911 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v97q\" (UniqueName: \"kubernetes.io/projected/f07ad40f-936a-40fd-b69e-308239229a25-kube-api-access-2v97q\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.502980 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.502994 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f07ad40f-936a-40fd-b69e-308239229a25-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.908020 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" event={"ID":"f07ad40f-936a-40fd-b69e-308239229a25","Type":"ContainerDied","Data":"70bb6c7de2a5a4f52ddac7b6d437506c7a3fe6da375b762096849411282f164c"} Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.908074 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70bb6c7de2a5a4f52ddac7b6d437506c7a3fe6da375b762096849411282f164c" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.908516 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.985621 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4"] Dec 02 10:44:47 crc kubenswrapper[4813]: E1202 10:44:47.986415 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07ad40f-936a-40fd-b69e-308239229a25" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.986566 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07ad40f-936a-40fd-b69e-308239229a25" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.987480 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07ad40f-936a-40fd-b69e-308239229a25" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.989807 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.993799 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.993882 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.993910 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.993880 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:44:47 crc kubenswrapper[4813]: I1202 10:44:47.995996 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4"] Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.116764 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.117309 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqbt\" (UniqueName: \"kubernetes.io/projected/b1393a3f-de97-4a49-a26c-c371126a3395-kube-api-access-tkqbt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.117446 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.218702 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.219240 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqbt\" (UniqueName: \"kubernetes.io/projected/b1393a3f-de97-4a49-a26c-c371126a3395-kube-api-access-tkqbt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.219357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.222668 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.228486 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.238723 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqbt\" (UniqueName: \"kubernetes.io/projected/b1393a3f-de97-4a49-a26c-c371126a3395-kube-api-access-tkqbt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.318514 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.842710 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4"] Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.852564 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:44:48 crc kubenswrapper[4813]: I1202 10:44:48.917572 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" event={"ID":"b1393a3f-de97-4a49-a26c-c371126a3395","Type":"ContainerStarted","Data":"d369c885516b9aa60828fbe8641fe88e1f5f00dd4e291d71f19654d86415987d"} Dec 02 10:44:50 crc kubenswrapper[4813]: I1202 10:44:50.940786 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" event={"ID":"b1393a3f-de97-4a49-a26c-c371126a3395","Type":"ContainerStarted","Data":"84baaadf1576e6c32afc7afac7d36000feb8099bcc83126946aa3c84021680b9"} Dec 02 10:44:50 crc kubenswrapper[4813]: I1202 10:44:50.968279 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" podStartSLOduration=2.999225166 podStartE2EDuration="3.968254991s" podCreationTimestamp="2025-12-02 10:44:47 +0000 UTC" firstStartedPulling="2025-12-02 10:44:48.852354909 +0000 UTC m=+2213.047529211" lastFinishedPulling="2025-12-02 10:44:49.821384734 +0000 UTC m=+2214.016559036" observedRunningTime="2025-12-02 10:44:50.960676764 +0000 UTC m=+2215.155851076" watchObservedRunningTime="2025-12-02 10:44:50.968254991 +0000 UTC m=+2215.163429323" Dec 02 10:44:55 crc kubenswrapper[4813]: I1202 10:44:55.984951 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="b1393a3f-de97-4a49-a26c-c371126a3395" containerID="84baaadf1576e6c32afc7afac7d36000feb8099bcc83126946aa3c84021680b9" exitCode=0 Dec 02 10:44:55 crc kubenswrapper[4813]: I1202 10:44:55.985088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" event={"ID":"b1393a3f-de97-4a49-a26c-c371126a3395","Type":"ContainerDied","Data":"84baaadf1576e6c32afc7afac7d36000feb8099bcc83126946aa3c84021680b9"} Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.373549 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.495831 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-ssh-key\") pod \"b1393a3f-de97-4a49-a26c-c371126a3395\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.495912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkqbt\" (UniqueName: \"kubernetes.io/projected/b1393a3f-de97-4a49-a26c-c371126a3395-kube-api-access-tkqbt\") pod \"b1393a3f-de97-4a49-a26c-c371126a3395\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.495996 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-inventory\") pod \"b1393a3f-de97-4a49-a26c-c371126a3395\" (UID: \"b1393a3f-de97-4a49-a26c-c371126a3395\") " Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.501168 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1393a3f-de97-4a49-a26c-c371126a3395-kube-api-access-tkqbt" (OuterVolumeSpecName: "kube-api-access-tkqbt") pod "b1393a3f-de97-4a49-a26c-c371126a3395" (UID: "b1393a3f-de97-4a49-a26c-c371126a3395"). InnerVolumeSpecName "kube-api-access-tkqbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.520220 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b1393a3f-de97-4a49-a26c-c371126a3395" (UID: "b1393a3f-de97-4a49-a26c-c371126a3395"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.522879 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-inventory" (OuterVolumeSpecName: "inventory") pod "b1393a3f-de97-4a49-a26c-c371126a3395" (UID: "b1393a3f-de97-4a49-a26c-c371126a3395"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.598185 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.598225 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkqbt\" (UniqueName: \"kubernetes.io/projected/b1393a3f-de97-4a49-a26c-c371126a3395-kube-api-access-tkqbt\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:57 crc kubenswrapper[4813]: I1202 10:44:57.598237 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1393a3f-de97-4a49-a26c-c371126a3395-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.005325 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" event={"ID":"b1393a3f-de97-4a49-a26c-c371126a3395","Type":"ContainerDied","Data":"d369c885516b9aa60828fbe8641fe88e1f5f00dd4e291d71f19654d86415987d"} Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.005365 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.005374 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d369c885516b9aa60828fbe8641fe88e1f5f00dd4e291d71f19654d86415987d" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.068329 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:44:58 crc kubenswrapper[4813]: E1202 10:44:58.068885 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.081670 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz"] Dec 02 10:44:58 crc kubenswrapper[4813]: E1202 10:44:58.082209 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1393a3f-de97-4a49-a26c-c371126a3395" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.082232 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1393a3f-de97-4a49-a26c-c371126a3395" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.082653 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1393a3f-de97-4a49-a26c-c371126a3395" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.083464 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.088206 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.088641 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.088686 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.088786 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.092848 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz"] Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.216192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kh5\" (UniqueName: \"kubernetes.io/projected/44d68626-018a-4ac1-a15f-35e9ad8444f4-kube-api-access-82kh5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.216533 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.216609 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.318531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82kh5\" (UniqueName: \"kubernetes.io/projected/44d68626-018a-4ac1-a15f-35e9ad8444f4-kube-api-access-82kh5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.319014 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.319206 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: 
\"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.327933 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.328398 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.344752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82kh5\" (UniqueName: \"kubernetes.io/projected/44d68626-018a-4ac1-a15f-35e9ad8444f4-kube-api-access-82kh5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ftkfz\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:58 crc kubenswrapper[4813]: I1202 10:44:58.416185 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:44:59 crc kubenswrapper[4813]: I1202 10:44:59.051153 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xqrjx"] Dec 02 10:44:59 crc kubenswrapper[4813]: I1202 10:44:59.059619 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xqrjx"] Dec 02 10:44:59 crc kubenswrapper[4813]: I1202 10:44:59.671680 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz"] Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.025288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" event={"ID":"44d68626-018a-4ac1-a15f-35e9ad8444f4","Type":"ContainerStarted","Data":"bbb3bc1024488a377612395e4e7d32c0ae150f4ce3eecff3aa12cdcb4b46591a"} Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.081849 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5ddb27-75c7-48ac-9357-409e97f3020e" path="/var/lib/kubelet/pods/8f5ddb27-75c7-48ac-9357-409e97f3020e/volumes" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.144188 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6"] Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.146027 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.148338 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.149903 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.154933 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6"] Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.257220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7473fd60-f8fe-4e62-bc39-1147d6a57f71-config-volume\") pod \"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.257638 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmghz\" (UniqueName: \"kubernetes.io/projected/7473fd60-f8fe-4e62-bc39-1147d6a57f71-kube-api-access-lmghz\") pod \"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.257734 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7473fd60-f8fe-4e62-bc39-1147d6a57f71-secret-volume\") pod \"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.359826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7473fd60-f8fe-4e62-bc39-1147d6a57f71-config-volume\") pod \"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.359896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmghz\" (UniqueName: \"kubernetes.io/projected/7473fd60-f8fe-4e62-bc39-1147d6a57f71-kube-api-access-lmghz\") pod \"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.359965 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7473fd60-f8fe-4e62-bc39-1147d6a57f71-secret-volume\") pod \"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.360776 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7473fd60-f8fe-4e62-bc39-1147d6a57f71-config-volume\") pod 
\"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.373862 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7473fd60-f8fe-4e62-bc39-1147d6a57f71-secret-volume\") pod \"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.376456 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmghz\" (UniqueName: \"kubernetes.io/projected/7473fd60-f8fe-4e62-bc39-1147d6a57f71-kube-api-access-lmghz\") pod \"collect-profiles-29411205-9kpq6\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.491233 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:00 crc kubenswrapper[4813]: I1202 10:45:00.738193 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6"] Dec 02 10:45:01 crc kubenswrapper[4813]: I1202 10:45:01.039042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" event={"ID":"7473fd60-f8fe-4e62-bc39-1147d6a57f71","Type":"ContainerStarted","Data":"9b7bd818ccd689be9838893be514e98b4b8a1c0acc2fa16d4b6cd0505744adee"} Dec 02 10:45:01 crc kubenswrapper[4813]: I1202 10:45:01.039448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" event={"ID":"7473fd60-f8fe-4e62-bc39-1147d6a57f71","Type":"ContainerStarted","Data":"fd013e84ff1bec86410a335ec270740bea8d9a970ccc1215a49656184b286642"} Dec 02 10:45:01 crc kubenswrapper[4813]: I1202 10:45:01.041140 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" event={"ID":"44d68626-018a-4ac1-a15f-35e9ad8444f4","Type":"ContainerStarted","Data":"29a18c84a2cc1837c9ae0297501b18069ba9e0406ec818c74c2d7bef5baaefd3"} Dec 02 10:45:01 crc kubenswrapper[4813]: I1202 10:45:01.062180 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" podStartSLOduration=1.062162984 podStartE2EDuration="1.062162984s" podCreationTimestamp="2025-12-02 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 10:45:01.058252522 +0000 UTC m=+2225.253426844" watchObservedRunningTime="2025-12-02 10:45:01.062162984 +0000 UTC m=+2225.257337286" Dec 02 10:45:01 crc kubenswrapper[4813]: I1202 10:45:01.083693 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" podStartSLOduration=2.566407044 podStartE2EDuration="3.083670189s" podCreationTimestamp="2025-12-02 10:44:58 +0000 UTC" firstStartedPulling="2025-12-02 10:44:59.673629331 +0000 UTC m=+2223.868803633" lastFinishedPulling="2025-12-02 10:45:00.190892476 +0000 UTC m=+2224.386066778" observedRunningTime="2025-12-02 
10:45:01.081612631 +0000 UTC m=+2225.276786933" watchObservedRunningTime="2025-12-02 10:45:01.083670189 +0000 UTC m=+2225.278844501" Dec 02 10:45:01 crc kubenswrapper[4813]: E1202 10:45:01.253669 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7473fd60_f8fe_4e62_bc39_1147d6a57f71.slice/crio-9b7bd818ccd689be9838893be514e98b4b8a1c0acc2fa16d4b6cd0505744adee.scope\": RecentStats: unable to find data in memory cache]" Dec 02 10:45:02 crc kubenswrapper[4813]: I1202 10:45:02.054808 4813 generic.go:334] "Generic (PLEG): container finished" podID="7473fd60-f8fe-4e62-bc39-1147d6a57f71" containerID="9b7bd818ccd689be9838893be514e98b4b8a1c0acc2fa16d4b6cd0505744adee" exitCode=0 Dec 02 10:45:02 crc kubenswrapper[4813]: I1202 10:45:02.054932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" event={"ID":"7473fd60-f8fe-4e62-bc39-1147d6a57f71","Type":"ContainerDied","Data":"9b7bd818ccd689be9838893be514e98b4b8a1c0acc2fa16d4b6cd0505744adee"} Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.034895 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7skjt"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.050616 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2mwx2"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.067832 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-802e-account-create-update-ggb6g"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.081375 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bcl76"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.089713 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-802e-account-create-update-ggb6g"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.096752 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2mwx2"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.104273 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7skjt"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.120029 4813 scope.go:117] "RemoveContainer" containerID="b19f88f9ad986048b32b44369beaa8a98ce923f5327f11c3d7eba5cdb68e94ef" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.121209 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-20a8-account-create-update-8lpn8"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.130989 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bcl76"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.147714 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-20a8-account-create-update-8lpn8"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.153544 4813 scope.go:117] "RemoveContainer" containerID="b8910dd4bb237b8de01a04bfe71d6c05c4ac6ce6590291541f5aa8da83858747" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.165996 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7666-account-create-update-zclc9"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.178920 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7666-account-create-update-zclc9"] Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 
10:45:03.202059 4813 scope.go:117] "RemoveContainer" containerID="07c206cae49861eb5882c29952aaa0898704e1f8cb40b7d499d16de897b8c911" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.258711 4813 scope.go:117] "RemoveContainer" containerID="fd46bb4f91584876c71c901bd4665392b6c2066e1959637c7d41c7de3588b228" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.300802 4813 scope.go:117] "RemoveContainer" containerID="203a43e61a3787fb1b06fd4f4c559f16cb75de12a716543a2a02efe839a8aa8e" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.378191 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.385308 4813 scope.go:117] "RemoveContainer" containerID="cb30d85b987a54d447f5ca49e143873b4a00c21d4b57b7aaa2787b2ba3b45d12" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.410051 4813 scope.go:117] "RemoveContainer" containerID="501561c880e33e41c85923830c2bdc838ecebad31ecf8419f8a4acd3639e0719" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.525897 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7473fd60-f8fe-4e62-bc39-1147d6a57f71-config-volume\") pod \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.526049 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7473fd60-f8fe-4e62-bc39-1147d6a57f71-secret-volume\") pod \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.526277 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmghz\" (UniqueName: \"kubernetes.io/projected/7473fd60-f8fe-4e62-bc39-1147d6a57f71-kube-api-access-lmghz\") pod \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\" (UID: \"7473fd60-f8fe-4e62-bc39-1147d6a57f71\") " Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.526829 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7473fd60-f8fe-4e62-bc39-1147d6a57f71-config-volume" (OuterVolumeSpecName: "config-volume") pod "7473fd60-f8fe-4e62-bc39-1147d6a57f71" (UID: "7473fd60-f8fe-4e62-bc39-1147d6a57f71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.531992 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7473fd60-f8fe-4e62-bc39-1147d6a57f71-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7473fd60-f8fe-4e62-bc39-1147d6a57f71" (UID: "7473fd60-f8fe-4e62-bc39-1147d6a57f71"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.531999 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7473fd60-f8fe-4e62-bc39-1147d6a57f71-kube-api-access-lmghz" (OuterVolumeSpecName: "kube-api-access-lmghz") pod "7473fd60-f8fe-4e62-bc39-1147d6a57f71" (UID: "7473fd60-f8fe-4e62-bc39-1147d6a57f71"). InnerVolumeSpecName "kube-api-access-lmghz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.628583 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7473fd60-f8fe-4e62-bc39-1147d6a57f71-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.628625 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7473fd60-f8fe-4e62-bc39-1147d6a57f71-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:03 crc kubenswrapper[4813]: I1202 10:45:03.628639 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmghz\" (UniqueName: \"kubernetes.io/projected/7473fd60-f8fe-4e62-bc39-1147d6a57f71-kube-api-access-lmghz\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.075198 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.083773 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2a6381-8062-465d-b129-ac4153f9305e" path="/var/lib/kubelet/pods/3b2a6381-8062-465d-b129-ac4153f9305e/volumes" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.085054 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5e6919-d274-40a5-b500-77f83781a452" path="/var/lib/kubelet/pods/4f5e6919-d274-40a5-b500-77f83781a452/volumes" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.086786 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce95401-372b-4cad-b5b7-82d3575cd3da" path="/var/lib/kubelet/pods/7ce95401-372b-4cad-b5b7-82d3575cd3da/volumes" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.087879 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f206f49-f7ca-479b-843e-2377ddb90ce1" path="/var/lib/kubelet/pods/8f206f49-f7ca-479b-843e-2377ddb90ce1/volumes" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.089012 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b20b5439-344c-4a1b-a474-d9e7939e7e3e" path="/var/lib/kubelet/pods/b20b5439-344c-4a1b-a474-d9e7939e7e3e/volumes" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.091251 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda52a79-9d85-4a62-935e-b0e43270148c" path="/var/lib/kubelet/pods/fda52a79-9d85-4a62-935e-b0e43270148c/volumes" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.092690 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6" event={"ID":"7473fd60-f8fe-4e62-bc39-1147d6a57f71","Type":"ContainerDied","Data":"fd013e84ff1bec86410a335ec270740bea8d9a970ccc1215a49656184b286642"} Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.092743 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd013e84ff1bec86410a335ec270740bea8d9a970ccc1215a49656184b286642" Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.436231 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg"] Dec 02 10:45:04 crc kubenswrapper[4813]: I1202 10:45:04.446748 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411160-k99vg"] Dec 02 10:45:06 crc 
kubenswrapper[4813]: I1202 10:45:06.084753 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec41424c-e403-485d-aa92-32c0c41e7238" path="/var/lib/kubelet/pods/ec41424c-e403-485d-aa92-32c0c41e7238/volumes" Dec 02 10:45:07 crc kubenswrapper[4813]: I1202 10:45:07.033934 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hn79k"] Dec 02 10:45:07 crc kubenswrapper[4813]: I1202 10:45:07.062087 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-hn79k"] Dec 02 10:45:08 crc kubenswrapper[4813]: I1202 10:45:08.084404 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82676b1f-f4f3-42df-be67-62bbd3373116" path="/var/lib/kubelet/pods/82676b1f-f4f3-42df-be67-62bbd3373116/volumes" Dec 02 10:45:12 crc kubenswrapper[4813]: I1202 10:45:12.068977 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:45:12 crc kubenswrapper[4813]: E1202 10:45:12.069812 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:45:26 crc kubenswrapper[4813]: I1202 10:45:26.072937 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:45:26 crc kubenswrapper[4813]: E1202 10:45:26.073594 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:45:38 crc kubenswrapper[4813]: I1202 10:45:38.068328 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:45:38 crc kubenswrapper[4813]: E1202 10:45:38.069228 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:45:41 crc kubenswrapper[4813]: I1202 10:45:41.429665 4813 generic.go:334] "Generic (PLEG): container finished" podID="44d68626-018a-4ac1-a15f-35e9ad8444f4" containerID="29a18c84a2cc1837c9ae0297501b18069ba9e0406ec818c74c2d7bef5baaefd3" exitCode=0 Dec 02 10:45:41 crc kubenswrapper[4813]: I1202 10:45:41.429749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" event={"ID":"44d68626-018a-4ac1-a15f-35e9ad8444f4","Type":"ContainerDied","Data":"29a18c84a2cc1837c9ae0297501b18069ba9e0406ec818c74c2d7bef5baaefd3"} Dec 02 10:45:42 crc kubenswrapper[4813]: I1202 10:45:42.815812 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:45:42 crc kubenswrapper[4813]: I1202 10:45:42.990818 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-inventory\") pod \"44d68626-018a-4ac1-a15f-35e9ad8444f4\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " Dec 02 10:45:42 crc kubenswrapper[4813]: I1202 10:45:42.991108 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-ssh-key\") pod \"44d68626-018a-4ac1-a15f-35e9ad8444f4\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " Dec 02 10:45:42 crc kubenswrapper[4813]: I1202 10:45:42.991162 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82kh5\" (UniqueName: \"kubernetes.io/projected/44d68626-018a-4ac1-a15f-35e9ad8444f4-kube-api-access-82kh5\") pod \"44d68626-018a-4ac1-a15f-35e9ad8444f4\" (UID: \"44d68626-018a-4ac1-a15f-35e9ad8444f4\") " Dec 02 10:45:42 crc kubenswrapper[4813]: I1202 10:45:42.999349 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d68626-018a-4ac1-a15f-35e9ad8444f4-kube-api-access-82kh5" (OuterVolumeSpecName: "kube-api-access-82kh5") pod "44d68626-018a-4ac1-a15f-35e9ad8444f4" (UID: "44d68626-018a-4ac1-a15f-35e9ad8444f4"). InnerVolumeSpecName "kube-api-access-82kh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.017787 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-inventory" (OuterVolumeSpecName: "inventory") pod "44d68626-018a-4ac1-a15f-35e9ad8444f4" (UID: "44d68626-018a-4ac1-a15f-35e9ad8444f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.027927 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "44d68626-018a-4ac1-a15f-35e9ad8444f4" (UID: "44d68626-018a-4ac1-a15f-35e9ad8444f4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.093657 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.093697 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/44d68626-018a-4ac1-a15f-35e9ad8444f4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.093710 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82kh5\" (UniqueName: \"kubernetes.io/projected/44d68626-018a-4ac1-a15f-35e9ad8444f4-kube-api-access-82kh5\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.448959 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" event={"ID":"44d68626-018a-4ac1-a15f-35e9ad8444f4","Type":"ContainerDied","Data":"bbb3bc1024488a377612395e4e7d32c0ae150f4ce3eecff3aa12cdcb4b46591a"} Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.449007 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb3bc1024488a377612395e4e7d32c0ae150f4ce3eecff3aa12cdcb4b46591a" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.449014 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.532267 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r"] Dec 02 10:45:43 crc kubenswrapper[4813]: E1202 10:45:43.532727 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d68626-018a-4ac1-a15f-35e9ad8444f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.532750 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d68626-018a-4ac1-a15f-35e9ad8444f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:43 crc kubenswrapper[4813]: E1202 10:45:43.532768 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7473fd60-f8fe-4e62-bc39-1147d6a57f71" containerName="collect-profiles" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.532777 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7473fd60-f8fe-4e62-bc39-1147d6a57f71" containerName="collect-profiles" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.536532 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7473fd60-f8fe-4e62-bc39-1147d6a57f71" containerName="collect-profiles" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.536657 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d68626-018a-4ac1-a15f-35e9ad8444f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.537856 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.540019 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.540049 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.542882 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.543385 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.547573 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r"] Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.709831 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.710279 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.710678 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcgmv\" (UniqueName: \"kubernetes.io/projected/41db46e0-68d6-435f-a4b8-16a67663eedf-kube-api-access-gcgmv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.812098 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcgmv\" (UniqueName: \"kubernetes.io/projected/41db46e0-68d6-435f-a4b8-16a67663eedf-kube-api-access-gcgmv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.812189 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.812252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" 
(UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.815703 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.815769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.828809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcgmv\" (UniqueName: \"kubernetes.io/projected/41db46e0-68d6-435f-a4b8-16a67663eedf-kube-api-access-gcgmv\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:43 crc kubenswrapper[4813]: I1202 10:45:43.854662 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:44 crc kubenswrapper[4813]: I1202 10:45:44.040697 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4md7t"] Dec 02 10:45:44 crc kubenswrapper[4813]: I1202 10:45:44.052847 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4md7t"] Dec 02 10:45:44 crc kubenswrapper[4813]: I1202 10:45:44.079907 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb09f23-052f-4207-8c2b-ea7736d76499" path="/var/lib/kubelet/pods/ceb09f23-052f-4207-8c2b-ea7736d76499/volumes" Dec 02 10:45:44 crc kubenswrapper[4813]: I1202 10:45:44.388097 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r"] Dec 02 10:45:44 crc kubenswrapper[4813]: I1202 10:45:44.460343 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" event={"ID":"41db46e0-68d6-435f-a4b8-16a67663eedf","Type":"ContainerStarted","Data":"ff25547d1a8b30edd296b8d225585e9299bb4f75b8fa3a797fc834507e697c25"} Dec 02 10:45:45 crc kubenswrapper[4813]: I1202 10:45:45.469020 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" event={"ID":"41db46e0-68d6-435f-a4b8-16a67663eedf","Type":"ContainerStarted","Data":"bc382d64ecb40b9d3065d462033b7aaab6771df7368ef4c2f3632c71abb141f8"} Dec 02 10:45:45 crc kubenswrapper[4813]: I1202 10:45:45.487093 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" podStartSLOduration=1.9307414760000001 podStartE2EDuration="2.487050198s" podCreationTimestamp="2025-12-02 10:45:43 +0000 UTC" firstStartedPulling="2025-12-02 10:45:44.396211556 +0000 UTC m=+2268.591385858" lastFinishedPulling="2025-12-02 10:45:44.952520278 +0000 UTC m=+2269.147694580" 
observedRunningTime="2025-12-02 10:45:45.484092204 +0000 UTC m=+2269.679266516" watchObservedRunningTime="2025-12-02 10:45:45.487050198 +0000 UTC m=+2269.682224500" Dec 02 10:45:46 crc kubenswrapper[4813]: I1202 10:45:46.031639 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d585l"] Dec 02 10:45:46 crc kubenswrapper[4813]: I1202 10:45:46.038684 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d585l"] Dec 02 10:45:46 crc kubenswrapper[4813]: I1202 10:45:46.081289 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad57710-e572-4e03-8f87-6770c28d8c0c" path="/var/lib/kubelet/pods/aad57710-e572-4e03-8f87-6770c28d8c0c/volumes" Dec 02 10:45:49 crc kubenswrapper[4813]: I1202 10:45:49.503799 4813 generic.go:334] "Generic (PLEG): container finished" podID="41db46e0-68d6-435f-a4b8-16a67663eedf" containerID="bc382d64ecb40b9d3065d462033b7aaab6771df7368ef4c2f3632c71abb141f8" exitCode=0 Dec 02 10:45:49 crc kubenswrapper[4813]: I1202 10:45:49.503900 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" event={"ID":"41db46e0-68d6-435f-a4b8-16a67663eedf","Type":"ContainerDied","Data":"bc382d64ecb40b9d3065d462033b7aaab6771df7368ef4c2f3632c71abb141f8"} Dec 02 10:45:50 crc kubenswrapper[4813]: I1202 10:45:50.910819 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.026379 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kd7cw"] Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.034597 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kd7cw"] Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.041040 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-inventory\") pod \"41db46e0-68d6-435f-a4b8-16a67663eedf\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.041143 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-ssh-key\") pod \"41db46e0-68d6-435f-a4b8-16a67663eedf\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.041190 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcgmv\" (UniqueName: \"kubernetes.io/projected/41db46e0-68d6-435f-a4b8-16a67663eedf-kube-api-access-gcgmv\") pod \"41db46e0-68d6-435f-a4b8-16a67663eedf\" (UID: \"41db46e0-68d6-435f-a4b8-16a67663eedf\") " Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.046628 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41db46e0-68d6-435f-a4b8-16a67663eedf-kube-api-access-gcgmv" (OuterVolumeSpecName: "kube-api-access-gcgmv") pod "41db46e0-68d6-435f-a4b8-16a67663eedf" (UID: "41db46e0-68d6-435f-a4b8-16a67663eedf"). InnerVolumeSpecName "kube-api-access-gcgmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.064912 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "41db46e0-68d6-435f-a4b8-16a67663eedf" (UID: "41db46e0-68d6-435f-a4b8-16a67663eedf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.065715 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-inventory" (OuterVolumeSpecName: "inventory") pod "41db46e0-68d6-435f-a4b8-16a67663eedf" (UID: "41db46e0-68d6-435f-a4b8-16a67663eedf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.143358 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.143390 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41db46e0-68d6-435f-a4b8-16a67663eedf-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.143399 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcgmv\" (UniqueName: \"kubernetes.io/projected/41db46e0-68d6-435f-a4b8-16a67663eedf-kube-api-access-gcgmv\") on node \"crc\" DevicePath \"\"" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.523824 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" event={"ID":"41db46e0-68d6-435f-a4b8-16a67663eedf","Type":"ContainerDied","Data":"ff25547d1a8b30edd296b8d225585e9299bb4f75b8fa3a797fc834507e697c25"} Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.524090 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff25547d1a8b30edd296b8d225585e9299bb4f75b8fa3a797fc834507e697c25" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.523871 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.589596 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz"] Dec 02 10:45:51 crc kubenswrapper[4813]: E1202 10:45:51.590649 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41db46e0-68d6-435f-a4b8-16a67663eedf" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.590746 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="41db46e0-68d6-435f-a4b8-16a67663eedf" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.590985 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="41db46e0-68d6-435f-a4b8-16a67663eedf" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.591684 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.594852 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.596701 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.596808 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.597181 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.610866 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz"] Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.754273 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.754846 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlr4r\" (UniqueName: \"kubernetes.io/projected/1e5ced15-3788-4822-a702-d60d57b5e36f-kube-api-access-nlr4r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.755113 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.856460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.856795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.856892 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlr4r\" (UniqueName: \"kubernetes.io/projected/1e5ced15-3788-4822-a702-d60d57b5e36f-kube-api-access-nlr4r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" 
(UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.860420 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.860420 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.873897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlr4r\" (UniqueName: \"kubernetes.io/projected/1e5ced15-3788-4822-a702-d60d57b5e36f-kube-api-access-nlr4r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:51 crc kubenswrapper[4813]: I1202 10:45:51.909370 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:45:52 crc kubenswrapper[4813]: I1202 10:45:52.068317 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:45:52 crc kubenswrapper[4813]: E1202 10:45:52.068723 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:45:52 crc kubenswrapper[4813]: I1202 10:45:52.078571 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41229fd1-12e2-41db-96d9-ac6349cf5756" path="/var/lib/kubelet/pods/41229fd1-12e2-41db-96d9-ac6349cf5756/volumes" Dec 02 10:45:52 crc kubenswrapper[4813]: I1202 10:45:52.441019 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz"] Dec 02 10:45:52 crc kubenswrapper[4813]: I1202 10:45:52.532419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" event={"ID":"1e5ced15-3788-4822-a702-d60d57b5e36f","Type":"ContainerStarted","Data":"ceb4bce30a677d01bcb876a084010ff79d7d6af11a5751f7ea12e0ca87f3f46f"} Dec 02 10:45:53 crc kubenswrapper[4813]: I1202 10:45:53.542506 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" event={"ID":"1e5ced15-3788-4822-a702-d60d57b5e36f","Type":"ContainerStarted","Data":"d616f340caa88c6f9ca96b58c62c12903261a48aaa7dfcf393cc9c936e479f0a"} Dec 02 10:45:53 crc kubenswrapper[4813]: I1202 10:45:53.571793 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" podStartSLOduration=2.068482408 podStartE2EDuration="2.571763603s" podCreationTimestamp="2025-12-02 10:45:51 +0000 UTC" firstStartedPulling="2025-12-02 10:45:52.448587245 +0000 UTC m=+2276.643761557" lastFinishedPulling="2025-12-02 10:45:52.95186845 +0000 UTC m=+2277.147042752" observedRunningTime="2025-12-02 10:45:53.560179141 +0000 UTC m=+2277.755353483" watchObservedRunningTime="2025-12-02 10:45:53.571763603 +0000 UTC m=+2277.766937935" Dec 02 10:45:55 crc kubenswrapper[4813]: I1202 10:45:55.034555 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hz75q"] Dec 02 10:45:55 crc kubenswrapper[4813]: I1202 10:45:55.041705 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-hz75q"] Dec 02 10:45:56 crc kubenswrapper[4813]: I1202 10:45:56.099151 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b7b8eae-da35-4f54-83ec-6343ebedecfa" path="/var/lib/kubelet/pods/5b7b8eae-da35-4f54-83ec-6343ebedecfa/volumes" Dec 02 10:45:58 crc kubenswrapper[4813]: I1202 10:45:58.036040 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2gtvp"] Dec 02 10:45:58 crc kubenswrapper[4813]: I1202 10:45:58.047295 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2gtvp"] Dec 02 10:45:58 crc kubenswrapper[4813]: I1202 10:45:58.079419 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeebb6e7-c26e-421b-ab9c-4b75379601bf" path="/var/lib/kubelet/pods/aeebb6e7-c26e-421b-ab9c-4b75379601bf/volumes" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.544695 4813 scope.go:117] "RemoveContainer" containerID="35f669889e6a0a3f23b168c157de95aee3d09f35e476acd8aaf0334fe490ea58" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.570403 4813 scope.go:117] "RemoveContainer" containerID="b30d81ad79714376ac0557f2827d267b0c16e6e8f01c5a35cb17e92f01b80a5d" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.629499 4813 scope.go:117] "RemoveContainer" containerID="c4bdf3c737d0bd9f7a539259286fff59738b25a59fcb2257d009f32579596237" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.679107 4813 scope.go:117] "RemoveContainer" containerID="e5d5e8e1e619d15222aa3baabff9552377ee414a626e1620c1457160ca4d1abc" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.705553 4813 scope.go:117] "RemoveContainer" containerID="74641adce3de7b5e6d842c96a22e65c9d8c64e3ee25f65e4b837015a0a7a4871" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.740094 4813 scope.go:117] "RemoveContainer" containerID="5d878ce7034685a3374806923d655a3007d562a1681f1713010cc1b8c938a0da" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.781500 4813 scope.go:117] "RemoveContainer" containerID="82a8fc7ae9f37519fc7ebe027c4a5fa667aca583e33dc56a6abbeaea7b1ed252" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.820931 4813 scope.go:117] "RemoveContainer" containerID="8f27dea7fc6bfae55d0759a6fc5425a67b4b47714abc449e80e7343168c169c1" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.840939 4813 scope.go:117] "RemoveContainer" containerID="1e9d4436ab510bb1c15e062d7d96769e04a47894d121b5ae8dae6533050f3016" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.882572 4813 scope.go:117] "RemoveContainer" containerID="75680c4de951d46bc51e1427ef02d21bd99921c4fc734cd6967d1b26ff5fd85e" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.916474 4813 scope.go:117] "RemoveContainer" 
containerID="4f93d146244abe410fd11e1ff1304257526953280fff9afc549a43ebdbfda79a" Dec 02 10:46:03 crc kubenswrapper[4813]: I1202 10:46:03.961142 4813 scope.go:117] "RemoveContainer" containerID="24b2344a94f10dc1e6be602a5ecccfd78a0b3422886aa38b7fb3105e3bd8afcb" Dec 02 10:46:04 crc kubenswrapper[4813]: I1202 10:46:04.000053 4813 scope.go:117] "RemoveContainer" containerID="6c3cc1d466544b200a35473e7230fb8d848e14b7c37b4d6336bc119416ea6f66" Dec 02 10:46:07 crc kubenswrapper[4813]: I1202 10:46:07.068538 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:46:07 crc kubenswrapper[4813]: E1202 10:46:07.069372 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:46:19 crc kubenswrapper[4813]: I1202 10:46:19.068416 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:46:19 crc kubenswrapper[4813]: E1202 10:46:19.069489 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:46:34 crc kubenswrapper[4813]: I1202 10:46:34.068473 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:46:34 crc kubenswrapper[4813]: E1202 10:46:34.069149 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.081735 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qnt25"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.091151 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d58b-account-create-update-v9r6z"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.105168 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-67g6c"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.112832 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0305-account-create-update-gck59"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.120038 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-whrfm"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.127039 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fec7-account-create-update-ltbqz"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.133038 4813 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0305-account-create-update-gck59"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.141190 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-67g6c"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.149464 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qnt25"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.179889 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-whrfm"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.190408 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fec7-account-create-update-ltbqz"] Dec 02 10:46:37 crc kubenswrapper[4813]: I1202 10:46:37.202559 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d58b-account-create-update-v9r6z"] Dec 02 10:46:38 crc kubenswrapper[4813]: I1202 10:46:38.077841 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40614701-94ba-4e26-bd54-c5f04422c5fa" path="/var/lib/kubelet/pods/40614701-94ba-4e26-bd54-c5f04422c5fa/volumes" Dec 02 10:46:38 crc kubenswrapper[4813]: I1202 10:46:38.078544 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8315d930-9aff-4394-9711-042ed1e4de69" path="/var/lib/kubelet/pods/8315d930-9aff-4394-9711-042ed1e4de69/volumes" Dec 02 10:46:38 crc kubenswrapper[4813]: I1202 10:46:38.079183 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cc7289e-4a0c-4273-9d08-89b05ea88cb2" path="/var/lib/kubelet/pods/8cc7289e-4a0c-4273-9d08-89b05ea88cb2/volumes" Dec 02 10:46:38 crc kubenswrapper[4813]: I1202 10:46:38.079825 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d21b686f-4092-4e67-9883-a7489038286c" path="/var/lib/kubelet/pods/d21b686f-4092-4e67-9883-a7489038286c/volumes" Dec 02 10:46:38 crc kubenswrapper[4813]: I1202 10:46:38.080838 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f2e281-93f1-4265-9590-d90489b8fb83" path="/var/lib/kubelet/pods/f2f2e281-93f1-4265-9590-d90489b8fb83/volumes" Dec 02 10:46:38 crc kubenswrapper[4813]: I1202 10:46:38.081343 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60b319e-ff22-4ef4-a0b7-02f50ef06c3b" path="/var/lib/kubelet/pods/f60b319e-ff22-4ef4-a0b7-02f50ef06c3b/volumes" Dec 02 10:46:47 crc kubenswrapper[4813]: I1202 10:46:47.068311 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:46:47 crc kubenswrapper[4813]: E1202 10:46:47.069825 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:46:51 crc kubenswrapper[4813]: I1202 10:46:51.090032 4813 generic.go:334] "Generic (PLEG): container finished" podID="1e5ced15-3788-4822-a702-d60d57b5e36f" containerID="d616f340caa88c6f9ca96b58c62c12903261a48aaa7dfcf393cc9c936e479f0a" exitCode=0 Dec 02 10:46:51 crc kubenswrapper[4813]: I1202 10:46:51.090060 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" event={"ID":"1e5ced15-3788-4822-a702-d60d57b5e36f","Type":"ContainerDied","Data":"d616f340caa88c6f9ca96b58c62c12903261a48aaa7dfcf393cc9c936e479f0a"} Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.502181 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.626137 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlr4r\" (UniqueName: \"kubernetes.io/projected/1e5ced15-3788-4822-a702-d60d57b5e36f-kube-api-access-nlr4r\") pod \"1e5ced15-3788-4822-a702-d60d57b5e36f\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.626638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-ssh-key\") pod \"1e5ced15-3788-4822-a702-d60d57b5e36f\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.626771 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-inventory\") pod \"1e5ced15-3788-4822-a702-d60d57b5e36f\" (UID: \"1e5ced15-3788-4822-a702-d60d57b5e36f\") " Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.632482 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5ced15-3788-4822-a702-d60d57b5e36f-kube-api-access-nlr4r" (OuterVolumeSpecName: "kube-api-access-nlr4r") pod "1e5ced15-3788-4822-a702-d60d57b5e36f" (UID: "1e5ced15-3788-4822-a702-d60d57b5e36f"). InnerVolumeSpecName "kube-api-access-nlr4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.670517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e5ced15-3788-4822-a702-d60d57b5e36f" (UID: "1e5ced15-3788-4822-a702-d60d57b5e36f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.679024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-inventory" (OuterVolumeSpecName: "inventory") pod "1e5ced15-3788-4822-a702-d60d57b5e36f" (UID: "1e5ced15-3788-4822-a702-d60d57b5e36f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.729559 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlr4r\" (UniqueName: \"kubernetes.io/projected/1e5ced15-3788-4822-a702-d60d57b5e36f-kube-api-access-nlr4r\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.729600 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:52 crc kubenswrapper[4813]: I1202 10:46:52.729613 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e5ced15-3788-4822-a702-d60d57b5e36f-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.112693 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" event={"ID":"1e5ced15-3788-4822-a702-d60d57b5e36f","Type":"ContainerDied","Data":"ceb4bce30a677d01bcb876a084010ff79d7d6af11a5751f7ea12e0ca87f3f46f"} Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.112765 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceb4bce30a677d01bcb876a084010ff79d7d6af11a5751f7ea12e0ca87f3f46f" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.112855 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.204428 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xdj9g"] Dec 02 10:46:53 crc kubenswrapper[4813]: E1202 10:46:53.204807 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5ced15-3788-4822-a702-d60d57b5e36f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.204826 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5ced15-3788-4822-a702-d60d57b5e36f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.205015 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5ced15-3788-4822-a702-d60d57b5e36f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.205707 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.207702 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.208230 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.208947 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.208973 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.222332 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xdj9g"] Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.341522 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xdj9g\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") " pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.341666 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xdj9g\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") " pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.341747 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htkb\" (UniqueName: \"kubernetes.io/projected/1d51de72-2e56-41e2-94da-1faf74e0b3c3-kube-api-access-6htkb\") pod \"ssh-known-hosts-edpm-deployment-xdj9g\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") " pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.443547 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xdj9g\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") " pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.443607 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6htkb\" (UniqueName: \"kubernetes.io/projected/1d51de72-2e56-41e2-94da-1faf74e0b3c3-kube-api-access-6htkb\") pod \"ssh-known-hosts-edpm-deployment-xdj9g\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") " pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.443721 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xdj9g\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") " pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" Dec 02 10:46:53 crc 
Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.450422 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xdj9g\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") " pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g"
Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.462241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6htkb\" (UniqueName: \"kubernetes.io/projected/1d51de72-2e56-41e2-94da-1faf74e0b3c3-kube-api-access-6htkb\") pod \"ssh-known-hosts-edpm-deployment-xdj9g\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") " pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g"
Dec 02 10:46:53 crc kubenswrapper[4813]: I1202 10:46:53.525616 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g"
Dec 02 10:46:54 crc kubenswrapper[4813]: I1202 10:46:54.048007 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xdj9g"]
Dec 02 10:46:54 crc kubenswrapper[4813]: I1202 10:46:54.124413 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" event={"ID":"1d51de72-2e56-41e2-94da-1faf74e0b3c3","Type":"ContainerStarted","Data":"936865f75866388b8af5c241eb479d964389d8514fe63ceda6f2b4b8957eb12e"}
Dec 02 10:46:56 crc kubenswrapper[4813]: I1202 10:46:56.148976 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" event={"ID":"1d51de72-2e56-41e2-94da-1faf74e0b3c3","Type":"ContainerStarted","Data":"b8a5961b015572826b1cf308ad92161b72aa80273dc51a1dd5322ba4229c23fb"}
Dec 02 10:47:01 crc kubenswrapper[4813]: I1202 10:47:01.068728 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152"
Dec 02 10:47:01 crc kubenswrapper[4813]: E1202 10:47:01.070565 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 10:47:03 crc kubenswrapper[4813]: I1202 10:47:03.209602 4813 generic.go:334] "Generic (PLEG): container finished" podID="1d51de72-2e56-41e2-94da-1faf74e0b3c3" containerID="b8a5961b015572826b1cf308ad92161b72aa80273dc51a1dd5322ba4229c23fb" exitCode=0
Dec 02 10:47:03 crc kubenswrapper[4813]: I1202 10:47:03.209640 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" event={"ID":"1d51de72-2e56-41e2-94da-1faf74e0b3c3","Type":"ContainerDied","Data":"b8a5961b015572826b1cf308ad92161b72aa80273dc51a1dd5322ba4229c23fb"}
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.082515 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mhjsh"]
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.082803 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mhjsh"]
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.209601 4813 scope.go:117] "RemoveContainer" containerID="57c01f7a0e990127dcb4d6408f1c8010a68abc9dae55def79f90097b280ade5e"
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.250772 4813 scope.go:117] "RemoveContainer" containerID="95fe92e373406952c5b80e6bcb0d675d66f9f6131deadd4a7d8ae4f9655b3555"
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.289593 4813 scope.go:117] "RemoveContainer" containerID="bfd85f6b28b7a89527f1976baac297dffd44b7ee257bc88e4eb2fe78d1c5b0c7"
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.342546 4813 scope.go:117] "RemoveContainer" containerID="94c2ae9ce4c805a17f3d25cb9a9ad16a2dab3bf460058bae8aa9ebbfbcf9d2e4"
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.386087 4813 scope.go:117] "RemoveContainer" containerID="5cc045b2d1289035698962d5a53f9b79f283acdcab045cc1b94422ba8e96970f"
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.429532 4813 scope.go:117] "RemoveContainer" containerID="6be1abe79456acd0225b1bd4877db818029e8822f156c4d7c872ca38fa8a1368"
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.562366 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g"
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.654614 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-ssh-key-openstack-edpm-ipam\") pod \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") "
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.654823 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6htkb\" (UniqueName: \"kubernetes.io/projected/1d51de72-2e56-41e2-94da-1faf74e0b3c3-kube-api-access-6htkb\") pod \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") "
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.654934 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-inventory-0\") pod \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\" (UID: \"1d51de72-2e56-41e2-94da-1faf74e0b3c3\") "
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.659907 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d51de72-2e56-41e2-94da-1faf74e0b3c3-kube-api-access-6htkb" (OuterVolumeSpecName: "kube-api-access-6htkb") pod "1d51de72-2e56-41e2-94da-1faf74e0b3c3" (UID: "1d51de72-2e56-41e2-94da-1faf74e0b3c3"). InnerVolumeSpecName "kube-api-access-6htkb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.678702 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1d51de72-2e56-41e2-94da-1faf74e0b3c3" (UID: "1d51de72-2e56-41e2-94da-1faf74e0b3c3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.679140 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1d51de72-2e56-41e2-94da-1faf74e0b3c3" (UID: "1d51de72-2e56-41e2-94da-1faf74e0b3c3"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.756514 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.756547 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6htkb\" (UniqueName: \"kubernetes.io/projected/1d51de72-2e56-41e2-94da-1faf74e0b3c3-kube-api-access-6htkb\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:04 crc kubenswrapper[4813]: I1202 10:47:04.756557 4813 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1d51de72-2e56-41e2-94da-1faf74e0b3c3-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.228575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" event={"ID":"1d51de72-2e56-41e2-94da-1faf74e0b3c3","Type":"ContainerDied","Data":"936865f75866388b8af5c241eb479d964389d8514fe63ceda6f2b4b8957eb12e"} Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.228823 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936865f75866388b8af5c241eb479d964389d8514fe63ceda6f2b4b8957eb12e" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.228876 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xdj9g" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.319702 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm"] Dec 02 10:47:05 crc kubenswrapper[4813]: E1202 10:47:05.320042 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d51de72-2e56-41e2-94da-1faf74e0b3c3" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.320052 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d51de72-2e56-41e2-94da-1faf74e0b3c3" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.320253 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d51de72-2e56-41e2-94da-1faf74e0b3c3" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.320778 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.322623 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.323658 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.324127 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.329067 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.335053 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm"] Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.468680 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.468749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rnp7\" (UniqueName: \"kubernetes.io/projected/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-kube-api-access-7rnp7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.468996 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.571411 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.571597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.571625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rnp7\" (UniqueName: \"kubernetes.io/projected/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-kube-api-access-7rnp7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.576729 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.593028 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.609948 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rnp7\" (UniqueName: \"kubernetes.io/projected/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-kube-api-access-7rnp7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qrpmm\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:05 crc kubenswrapper[4813]: I1202 10:47:05.640177 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:06 crc kubenswrapper[4813]: I1202 10:47:06.086635 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc60b93-1ed9-43f7-9aa7-20d830692b2a" path="/var/lib/kubelet/pods/3cc60b93-1ed9-43f7-9aa7-20d830692b2a/volumes" Dec 02 10:47:06 crc kubenswrapper[4813]: I1202 10:47:06.180727 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm"] Dec 02 10:47:06 crc kubenswrapper[4813]: I1202 10:47:06.245265 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" event={"ID":"f6e81a7d-ce65-4039-8a52-cfc250d66c0b","Type":"ContainerStarted","Data":"44099df07deae525aac8b8a08ee547b93ac6a278194386860c818f207c3fd5b4"} Dec 02 10:47:07 crc kubenswrapper[4813]: I1202 10:47:07.256741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" event={"ID":"f6e81a7d-ce65-4039-8a52-cfc250d66c0b","Type":"ContainerStarted","Data":"1e25d543fddf6c417fa3da569dc503d434569bd2691b84d1cd2f76a0d4cf3528"} Dec 02 10:47:07 crc kubenswrapper[4813]: I1202 10:47:07.285671 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" podStartSLOduration=1.843938673 podStartE2EDuration="2.285645266s" podCreationTimestamp="2025-12-02 10:47:05 +0000 UTC" firstStartedPulling="2025-12-02 10:47:06.188899966 +0000 UTC m=+2350.384074278" lastFinishedPulling="2025-12-02 10:47:06.630606569 +0000 UTC m=+2350.825780871" observedRunningTime="2025-12-02 10:47:07.280791308 +0000 UTC m=+2351.475965610" watchObservedRunningTime="2025-12-02 10:47:07.285645266 +0000 UTC m=+2351.480819608" Dec 02 10:47:13 crc kubenswrapper[4813]: I1202 10:47:13.068398 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:47:13 crc kubenswrapper[4813]: E1202 10:47:13.069344 4813 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:47:16 crc kubenswrapper[4813]: I1202 10:47:16.346316 4813 generic.go:334] "Generic (PLEG): container finished" podID="f6e81a7d-ce65-4039-8a52-cfc250d66c0b" containerID="1e25d543fddf6c417fa3da569dc503d434569bd2691b84d1cd2f76a0d4cf3528" exitCode=0 Dec 02 10:47:16 crc kubenswrapper[4813]: I1202 10:47:16.346401 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" event={"ID":"f6e81a7d-ce65-4039-8a52-cfc250d66c0b","Type":"ContainerDied","Data":"1e25d543fddf6c417fa3da569dc503d434569bd2691b84d1cd2f76a0d4cf3528"} Dec 02 10:47:17 crc kubenswrapper[4813]: I1202 10:47:17.880820 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.011045 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-inventory\") pod \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.011272 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-ssh-key\") pod \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.011299 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rnp7\" (UniqueName: \"kubernetes.io/projected/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-kube-api-access-7rnp7\") pod \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\" (UID: \"f6e81a7d-ce65-4039-8a52-cfc250d66c0b\") " Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.017380 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-kube-api-access-7rnp7" (OuterVolumeSpecName: "kube-api-access-7rnp7") pod "f6e81a7d-ce65-4039-8a52-cfc250d66c0b" (UID: "f6e81a7d-ce65-4039-8a52-cfc250d66c0b"). InnerVolumeSpecName "kube-api-access-7rnp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.037867 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f6e81a7d-ce65-4039-8a52-cfc250d66c0b" (UID: "f6e81a7d-ce65-4039-8a52-cfc250d66c0b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.042004 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-inventory" (OuterVolumeSpecName: "inventory") pod "f6e81a7d-ce65-4039-8a52-cfc250d66c0b" (UID: "f6e81a7d-ce65-4039-8a52-cfc250d66c0b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.113628 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.113652 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rnp7\" (UniqueName: \"kubernetes.io/projected/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-kube-api-access-7rnp7\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.113662 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e81a7d-ce65-4039-8a52-cfc250d66c0b-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.367765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" event={"ID":"f6e81a7d-ce65-4039-8a52-cfc250d66c0b","Type":"ContainerDied","Data":"44099df07deae525aac8b8a08ee547b93ac6a278194386860c818f207c3fd5b4"} Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.367822 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44099df07deae525aac8b8a08ee547b93ac6a278194386860c818f207c3fd5b4" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.367855 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.457576 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d"] Dec 02 10:47:18 crc kubenswrapper[4813]: E1202 10:47:18.458011 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e81a7d-ce65-4039-8a52-cfc250d66c0b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.458034 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e81a7d-ce65-4039-8a52-cfc250d66c0b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.458285 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e81a7d-ce65-4039-8a52-cfc250d66c0b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.458883 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.461416 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.461998 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.463017 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.466337 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.480672 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d"] Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.622019 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/17013aed-1ecc-4f3f-b160-a0aef30d713c-kube-api-access-zc9pb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.622125 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.622217 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.724560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.724725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.724978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/17013aed-1ecc-4f3f-b160-a0aef30d713c-kube-api-access-zc9pb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: 
\"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.730093 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.730880 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.754491 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/17013aed-1ecc-4f3f-b160-a0aef30d713c-kube-api-access-zc9pb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:18 crc kubenswrapper[4813]: I1202 10:47:18.778710 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:19 crc kubenswrapper[4813]: I1202 10:47:19.358559 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d"] Dec 02 10:47:19 crc kubenswrapper[4813]: I1202 10:47:19.375778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" event={"ID":"17013aed-1ecc-4f3f-b160-a0aef30d713c","Type":"ContainerStarted","Data":"a82f62f217bc3e7ac29b03348aeb587357984cd08221d28a0b5408f4f9708a73"} Dec 02 10:47:22 crc kubenswrapper[4813]: I1202 10:47:22.409031 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" event={"ID":"17013aed-1ecc-4f3f-b160-a0aef30d713c","Type":"ContainerStarted","Data":"0f0445d9371f3d0f18fcb2de73e3a91361e50bc4a67408955f83a67c266f8ae5"} Dec 02 10:47:22 crc kubenswrapper[4813]: I1202 10:47:22.426134 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" podStartSLOduration=2.360795932 podStartE2EDuration="4.426046308s" podCreationTimestamp="2025-12-02 10:47:18 +0000 UTC" firstStartedPulling="2025-12-02 10:47:19.366124463 +0000 UTC m=+2363.561298755" lastFinishedPulling="2025-12-02 10:47:21.431374809 +0000 UTC m=+2365.626549131" observedRunningTime="2025-12-02 10:47:22.421643262 +0000 UTC m=+2366.616817574" watchObservedRunningTime="2025-12-02 10:47:22.426046308 +0000 UTC m=+2366.621220620" Dec 02 10:47:26 crc kubenswrapper[4813]: I1202 10:47:26.082163 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:47:26 crc kubenswrapper[4813]: E1202 10:47:26.083028 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:47:27 crc kubenswrapper[4813]: I1202 10:47:27.063464 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bc7vv"] Dec 02 10:47:27 crc kubenswrapper[4813]: I1202 10:47:27.078012 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bc7vv"] Dec 02 10:47:28 crc kubenswrapper[4813]: I1202 10:47:28.035970 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9g8dx"] Dec 02 10:47:28 crc kubenswrapper[4813]: I1202 10:47:28.045368 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9g8dx"] Dec 02 10:47:28 crc kubenswrapper[4813]: I1202 10:47:28.081233 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02700b55-38cc-4aeb-b1a2-3e0820835639" path="/var/lib/kubelet/pods/02700b55-38cc-4aeb-b1a2-3e0820835639/volumes" Dec 02 10:47:28 crc kubenswrapper[4813]: I1202 10:47:28.081747 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eb91674-4da6-449f-a496-894a0210963b" path="/var/lib/kubelet/pods/5eb91674-4da6-449f-a496-894a0210963b/volumes" Dec 02 10:47:32 crc kubenswrapper[4813]: I1202 10:47:32.507439 4813 generic.go:334] "Generic (PLEG): container finished" podID="17013aed-1ecc-4f3f-b160-a0aef30d713c" containerID="0f0445d9371f3d0f18fcb2de73e3a91361e50bc4a67408955f83a67c266f8ae5" exitCode=0 Dec 02 10:47:32 crc kubenswrapper[4813]: I1202 10:47:32.507530 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" event={"ID":"17013aed-1ecc-4f3f-b160-a0aef30d713c","Type":"ContainerDied","Data":"0f0445d9371f3d0f18fcb2de73e3a91361e50bc4a67408955f83a67c266f8ae5"} Dec 02 10:47:33 crc kubenswrapper[4813]: I1202 10:47:33.980105 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.088014 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-inventory\") pod \"17013aed-1ecc-4f3f-b160-a0aef30d713c\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.088149 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-ssh-key\") pod \"17013aed-1ecc-4f3f-b160-a0aef30d713c\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.088356 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/17013aed-1ecc-4f3f-b160-a0aef30d713c-kube-api-access-zc9pb\") pod \"17013aed-1ecc-4f3f-b160-a0aef30d713c\" (UID: \"17013aed-1ecc-4f3f-b160-a0aef30d713c\") " Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.093664 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17013aed-1ecc-4f3f-b160-a0aef30d713c-kube-api-access-zc9pb" (OuterVolumeSpecName: "kube-api-access-zc9pb") pod "17013aed-1ecc-4f3f-b160-a0aef30d713c" (UID: "17013aed-1ecc-4f3f-b160-a0aef30d713c"). InnerVolumeSpecName "kube-api-access-zc9pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.123833 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17013aed-1ecc-4f3f-b160-a0aef30d713c" (UID: "17013aed-1ecc-4f3f-b160-a0aef30d713c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.134948 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-inventory" (OuterVolumeSpecName: "inventory") pod "17013aed-1ecc-4f3f-b160-a0aef30d713c" (UID: "17013aed-1ecc-4f3f-b160-a0aef30d713c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.191228 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9pb\" (UniqueName: \"kubernetes.io/projected/17013aed-1ecc-4f3f-b160-a0aef30d713c-kube-api-access-zc9pb\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.191587 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.191603 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17013aed-1ecc-4f3f-b160-a0aef30d713c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.535488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" event={"ID":"17013aed-1ecc-4f3f-b160-a0aef30d713c","Type":"ContainerDied","Data":"a82f62f217bc3e7ac29b03348aeb587357984cd08221d28a0b5408f4f9708a73"} Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.535546 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a82f62f217bc3e7ac29b03348aeb587357984cd08221d28a0b5408f4f9708a73" Dec 02 10:47:34 crc kubenswrapper[4813]: I1202 10:47:34.536034 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d" Dec 02 10:47:41 crc kubenswrapper[4813]: I1202 10:47:41.068414 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:47:41 crc kubenswrapper[4813]: E1202 10:47:41.069421 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:47:52 crc kubenswrapper[4813]: I1202 10:47:52.067976 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:47:52 crc kubenswrapper[4813]: E1202 10:47:52.068869 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:48:04 crc kubenswrapper[4813]: I1202 10:48:04.068394 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:48:04 crc kubenswrapper[4813]: E1202 10:48:04.069247 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:48:04 crc kubenswrapper[4813]: I1202 10:48:04.583990 4813 scope.go:117] "RemoveContainer" containerID="8d5be43539082a95ae82bd8f287905847e1d01aa3b955e6e23d75d32c207367c" Dec 02 10:48:04 crc kubenswrapper[4813]: I1202 10:48:04.659155 4813 scope.go:117] "RemoveContainer" containerID="38dcc9fbc5e7c99a2648f507a2af606d847d7b30a6f97b4183abe57a0231dd5f" Dec 02 10:48:04 crc kubenswrapper[4813]: I1202 10:48:04.704479 4813 scope.go:117] "RemoveContainer" containerID="4935a0a006ad6c988b8fde8d27c4ac0537508353b4cd4dcf95a60a47b82beddb" Dec 02 10:48:12 crc kubenswrapper[4813]: I1202 10:48:12.052963 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sj6rf"] Dec 02 10:48:12 crc kubenswrapper[4813]: I1202 10:48:12.065022 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sj6rf"] Dec 02 10:48:12 crc kubenswrapper[4813]: I1202 10:48:12.087637 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f99d60d-1399-4a43-9711-01a53a17257a" path="/var/lib/kubelet/pods/3f99d60d-1399-4a43-9711-01a53a17257a/volumes" Dec 02 10:48:15 crc kubenswrapper[4813]: I1202 10:48:15.069585 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:48:15 crc kubenswrapper[4813]: E1202 10:48:15.070232 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:48:26 crc kubenswrapper[4813]: I1202 10:48:26.074780 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:48:26 crc kubenswrapper[4813]: E1202 10:48:26.076005 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:48:39 crc kubenswrapper[4813]: I1202 10:48:39.069026 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:48:39 crc kubenswrapper[4813]: E1202 10:48:39.070134 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:48:52 crc kubenswrapper[4813]: I1202 10:48:52.068936 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:48:52 crc kubenswrapper[4813]: E1202 10:48:52.069965 4813 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:49:04 crc kubenswrapper[4813]: I1202 10:49:04.815789 4813 scope.go:117] "RemoveContainer" containerID="2ccebba788d3fd3ed6e792f77b123dee7ca1dbc3825a5fd36e979145b1a27552" Dec 02 10:49:07 crc kubenswrapper[4813]: I1202 10:49:07.068288 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:49:07 crc kubenswrapper[4813]: E1202 10:49:07.069039 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:49:20 crc kubenswrapper[4813]: I1202 10:49:20.067719 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:49:20 crc kubenswrapper[4813]: E1202 10:49:20.068931 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:49:34 crc kubenswrapper[4813]: I1202 10:49:34.067536 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:49:34 crc kubenswrapper[4813]: E1202 10:49:34.068183 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:49:47 crc kubenswrapper[4813]: I1202 10:49:47.068523 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152" Dec 02 10:49:47 crc kubenswrapper[4813]: I1202 10:49:47.857313 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"cbee9828e642b95594b6a1d54e70cf903aefb8fe393ade18728a5f77245eed79"} Dec 02 10:50:31 crc kubenswrapper[4813]: I1202 10:50:31.031043 4813 patch_prober.go:28] interesting pod/console-operator-58897d9998-mbprt container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 10:50:31 crc kubenswrapper[4813]: I1202 10:50:31.031632 
Dec 02 10:50:31 crc kubenswrapper[4813]: I1202 10:50:31.031055 4813 patch_prober.go:28] interesting pod/console-operator-58897d9998-mbprt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 02 10:50:31 crc kubenswrapper[4813]: I1202 10:50:31.031684 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mbprt" podUID="a917dd4e-95f4-4b15-93f3-d7555f527969" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.503176 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fkqp5"]
Dec 02 10:51:29 crc kubenswrapper[4813]: E1202 10:51:29.504160 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17013aed-1ecc-4f3f-b160-a0aef30d713c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.504177 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="17013aed-1ecc-4f3f-b160-a0aef30d713c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.504344 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="17013aed-1ecc-4f3f-b160-a0aef30d713c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.506527 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.534293 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkqp5"]
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.671525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7aa9d6-692d-4237-8923-b947d4fab022-utilities\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.671628 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq2b6\" (UniqueName: \"kubernetes.io/projected/1e7aa9d6-692d-4237-8923-b947d4fab022-kube-api-access-fq2b6\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.671673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7aa9d6-692d-4237-8923-b947d4fab022-catalog-content\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.773006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq2b6\" (UniqueName: \"kubernetes.io/projected/1e7aa9d6-692d-4237-8923-b947d4fab022-kube-api-access-fq2b6\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.773108 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7aa9d6-692d-4237-8923-b947d4fab022-catalog-content\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.773190 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7aa9d6-692d-4237-8923-b947d4fab022-utilities\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.773695 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7aa9d6-692d-4237-8923-b947d4fab022-utilities\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.773840 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7aa9d6-692d-4237-8923-b947d4fab022-catalog-content\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.800447 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq2b6\" (UniqueName: \"kubernetes.io/projected/1e7aa9d6-692d-4237-8923-b947d4fab022-kube-api-access-fq2b6\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5"
"MountVolume.SetUp succeeded for volume \"kube-api-access-fq2b6\" (UniqueName: \"kubernetes.io/projected/1e7aa9d6-692d-4237-8923-b947d4fab022-kube-api-access-fq2b6\") pod \"community-operators-fkqp5\" (UID: \"1e7aa9d6-692d-4237-8923-b947d4fab022\") " pod="openshift-marketplace/community-operators-fkqp5" Dec 02 10:51:29 crc kubenswrapper[4813]: I1202 10:51:29.829914 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fkqp5" Dec 02 10:51:30 crc kubenswrapper[4813]: I1202 10:51:30.382228 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkqp5"] Dec 02 10:51:30 crc kubenswrapper[4813]: I1202 10:51:30.795357 4813 generic.go:334] "Generic (PLEG): container finished" podID="1e7aa9d6-692d-4237-8923-b947d4fab022" containerID="45ee7fc561207bead48b79bb98a9d83a72475b32b94407709e5c6d6afcc07024" exitCode=0 Dec 02 10:51:30 crc kubenswrapper[4813]: I1202 10:51:30.795456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkqp5" event={"ID":"1e7aa9d6-692d-4237-8923-b947d4fab022","Type":"ContainerDied","Data":"45ee7fc561207bead48b79bb98a9d83a72475b32b94407709e5c6d6afcc07024"} Dec 02 10:51:30 crc kubenswrapper[4813]: I1202 10:51:30.795722 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkqp5" event={"ID":"1e7aa9d6-692d-4237-8923-b947d4fab022","Type":"ContainerStarted","Data":"aaeee23f9c98c4d7691a96e8c7a11ffcea6edf754f1942c0de5098f2f6491f2f"} Dec 02 10:51:30 crc kubenswrapper[4813]: I1202 10:51:30.799044 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:51:34 crc kubenswrapper[4813]: I1202 10:51:34.838779 4813 generic.go:334] "Generic (PLEG): container finished" podID="1e7aa9d6-692d-4237-8923-b947d4fab022" containerID="4bd8a0d6cdbfbd0e2c17c0aeb887eba1131036de3352f29e8505679a514460f7" exitCode=0 Dec 02 10:51:34 crc kubenswrapper[4813]: I1202 10:51:34.838895 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkqp5" event={"ID":"1e7aa9d6-692d-4237-8923-b947d4fab022","Type":"ContainerDied","Data":"4bd8a0d6cdbfbd0e2c17c0aeb887eba1131036de3352f29e8505679a514460f7"} Dec 02 10:51:36 crc kubenswrapper[4813]: I1202 10:51:36.856859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkqp5" event={"ID":"1e7aa9d6-692d-4237-8923-b947d4fab022","Type":"ContainerStarted","Data":"c630e1384c4b6cfad9be42c1740047d7b131483f3d8cff17ebb4b3e989d6613f"} Dec 02 10:51:36 crc kubenswrapper[4813]: I1202 10:51:36.889349 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fkqp5" podStartSLOduration=2.724793519 podStartE2EDuration="7.889330816s" podCreationTimestamp="2025-12-02 10:51:29 +0000 UTC" firstStartedPulling="2025-12-02 10:51:30.7987496 +0000 UTC m=+2614.993923902" lastFinishedPulling="2025-12-02 10:51:35.963286887 +0000 UTC m=+2620.158461199" observedRunningTime="2025-12-02 10:51:36.882011302 +0000 UTC m=+2621.077185604" watchObservedRunningTime="2025-12-02 10:51:36.889330816 +0000 UTC m=+2621.084505118" Dec 02 10:51:39 crc kubenswrapper[4813]: I1202 10:51:39.830446 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fkqp5" Dec 02 10:51:39 crc kubenswrapper[4813]: I1202 10:51:39.830997 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fkqp5" Dec 02 10:51:39 crc kubenswrapper[4813]: I1202 10:51:39.882348 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fkqp5" Dec 02 10:51:49 crc kubenswrapper[4813]: I1202 10:51:49.882751 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fkqp5" Dec 02 10:51:49 crc kubenswrapper[4813]: I1202 10:51:49.949145 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkqp5"] Dec 02 10:51:49 crc kubenswrapper[4813]: I1202 10:51:49.989393 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bjh7d"] Dec 02 10:51:49 crc kubenswrapper[4813]: I1202 10:51:49.989639 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bjh7d" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerName="registry-server" containerID="cri-o://ca2e3d8bc7ad8e123b7784e80170dac44ff8850b0296fbdc254b0004c1ca6f4b" gracePeriod=2 Dec 02 10:51:50 crc kubenswrapper[4813]: I1202 10:51:50.986908 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerID="ca2e3d8bc7ad8e123b7784e80170dac44ff8850b0296fbdc254b0004c1ca6f4b" exitCode=0 Dec 02 10:51:50 crc kubenswrapper[4813]: I1202 10:51:50.988248 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjh7d" event={"ID":"b3628f43-b85b-4527-831c-fcb40e1755cd","Type":"ContainerDied","Data":"ca2e3d8bc7ad8e123b7784e80170dac44ff8850b0296fbdc254b0004c1ca6f4b"} Dec 02 10:51:50 crc kubenswrapper[4813]: I1202 10:51:50.988358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjh7d" event={"ID":"b3628f43-b85b-4527-831c-fcb40e1755cd","Type":"ContainerDied","Data":"3bab5811c8d9988a7d83a57db38a3dab98f8a9d7c9b1863ddeb20b3a7a5ef94e"} Dec 02 10:51:50 crc kubenswrapper[4813]: I1202 10:51:50.988425 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bab5811c8d9988a7d83a57db38a3dab98f8a9d7c9b1863ddeb20b3a7a5ef94e" Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.028332 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bjh7d" Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.167336 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-utilities\") pod \"b3628f43-b85b-4527-831c-fcb40e1755cd\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.167407 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-catalog-content\") pod \"b3628f43-b85b-4527-831c-fcb40e1755cd\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.167549 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4drln\" (UniqueName: \"kubernetes.io/projected/b3628f43-b85b-4527-831c-fcb40e1755cd-kube-api-access-4drln\") pod \"b3628f43-b85b-4527-831c-fcb40e1755cd\" (UID: \"b3628f43-b85b-4527-831c-fcb40e1755cd\") " Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.167948 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-utilities" (OuterVolumeSpecName: "utilities") pod "b3628f43-b85b-4527-831c-fcb40e1755cd" (UID: "b3628f43-b85b-4527-831c-fcb40e1755cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.169131 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.175523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3628f43-b85b-4527-831c-fcb40e1755cd-kube-api-access-4drln" (OuterVolumeSpecName: "kube-api-access-4drln") pod "b3628f43-b85b-4527-831c-fcb40e1755cd" (UID: "b3628f43-b85b-4527-831c-fcb40e1755cd"). InnerVolumeSpecName "kube-api-access-4drln". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.210412 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3628f43-b85b-4527-831c-fcb40e1755cd" (UID: "b3628f43-b85b-4527-831c-fcb40e1755cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.270568 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3628f43-b85b-4527-831c-fcb40e1755cd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:51:51 crc kubenswrapper[4813]: I1202 10:51:51.270607 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4drln\" (UniqueName: \"kubernetes.io/projected/b3628f43-b85b-4527-831c-fcb40e1755cd-kube-api-access-4drln\") on node \"crc\" DevicePath \"\"" Dec 02 10:51:52 crc kubenswrapper[4813]: I1202 10:51:52.009160 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bjh7d" Dec 02 10:51:52 crc kubenswrapper[4813]: I1202 10:51:52.038258 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bjh7d"] Dec 02 10:51:52 crc kubenswrapper[4813]: I1202 10:51:52.045926 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bjh7d"] Dec 02 10:51:52 crc kubenswrapper[4813]: I1202 10:51:52.078976 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" path="/var/lib/kubelet/pods/b3628f43-b85b-4527-831c-fcb40e1755cd/volumes" Dec 02 10:52:04 crc kubenswrapper[4813]: I1202 10:52:04.274117 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:52:04 crc kubenswrapper[4813]: I1202 10:52:04.274807 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:52:04 crc kubenswrapper[4813]: I1202 10:52:04.955003 4813 scope.go:117] "RemoveContainer" containerID="ca2e3d8bc7ad8e123b7784e80170dac44ff8850b0296fbdc254b0004c1ca6f4b" Dec 02 10:52:04 crc kubenswrapper[4813]: I1202 10:52:04.986870 4813 scope.go:117] "RemoveContainer" containerID="6f9734bc6e997a93205b4c087421e337b831cf5e9eee7e22c7c57e853e66a70c" Dec 02 10:52:05 crc kubenswrapper[4813]: I1202 10:52:05.020902 4813 scope.go:117] "RemoveContainer" containerID="095437fd8698efc75b2af74d8f89130e5c76a3a91ace88aa265511324e87dd64" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.581893 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5lxs"] Dec 02 10:52:25 crc kubenswrapper[4813]: E1202 10:52:25.583046 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerName="extract-utilities" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.583067 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerName="extract-utilities" Dec 02 10:52:25 crc kubenswrapper[4813]: E1202 10:52:25.583140 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerName="registry-server" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.583152 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerName="registry-server" Dec 02 10:52:25 crc kubenswrapper[4813]: E1202 10:52:25.583187 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerName="extract-content" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.583197 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerName="extract-content" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.583462 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3628f43-b85b-4527-831c-fcb40e1755cd" containerName="registry-server" Dec 02 10:52:25 crc 
kubenswrapper[4813]: I1202 10:52:25.585537 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.592386 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5lxs"] Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.763110 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-catalog-content\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.763630 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v877x\" (UniqueName: \"kubernetes.io/projected/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-kube-api-access-v877x\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.763796 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-utilities\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.864987 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-utilities\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.865142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-catalog-content\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.865200 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v877x\" (UniqueName: \"kubernetes.io/projected/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-kube-api-access-v877x\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.865634 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-utilities\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.865676 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-catalog-content\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc 
kubenswrapper[4813]: I1202 10:52:25.895006 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v877x\" (UniqueName: \"kubernetes.io/projected/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-kube-api-access-v877x\") pod \"redhat-marketplace-l5lxs\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:25 crc kubenswrapper[4813]: I1202 10:52:25.912407 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:26 crc kubenswrapper[4813]: I1202 10:52:26.379760 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5lxs"] Dec 02 10:52:27 crc kubenswrapper[4813]: E1202 10:52:27.258449 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:52890->38.102.83.145:38861: write tcp 38.102.83.145:52890->38.102.83.145:38861: write: broken pipe Dec 02 10:52:27 crc kubenswrapper[4813]: I1202 10:52:27.343434 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerID="1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276" exitCode=0 Dec 02 10:52:27 crc kubenswrapper[4813]: I1202 10:52:27.343497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5lxs" event={"ID":"aa14546f-d696-4b0e-bd85-a9368cc2d1bd","Type":"ContainerDied","Data":"1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276"} Dec 02 10:52:27 crc kubenswrapper[4813]: I1202 10:52:27.343551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5lxs" event={"ID":"aa14546f-d696-4b0e-bd85-a9368cc2d1bd","Type":"ContainerStarted","Data":"22ae4dbbd865939d5d304039a7fe87d8bb06c87da2bab90171f34e1eafcb3223"} Dec 02 10:52:30 crc kubenswrapper[4813]: I1202 10:52:30.373619 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerID="a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617" exitCode=0 Dec 02 10:52:30 crc kubenswrapper[4813]: I1202 10:52:30.373699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5lxs" event={"ID":"aa14546f-d696-4b0e-bd85-a9368cc2d1bd","Type":"ContainerDied","Data":"a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617"} Dec 02 10:52:31 crc kubenswrapper[4813]: I1202 10:52:31.388805 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5lxs" event={"ID":"aa14546f-d696-4b0e-bd85-a9368cc2d1bd","Type":"ContainerStarted","Data":"d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928"} Dec 02 10:52:31 crc kubenswrapper[4813]: I1202 10:52:31.413554 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5lxs" podStartSLOduration=2.883846721 podStartE2EDuration="6.413530129s" podCreationTimestamp="2025-12-02 10:52:25 +0000 UTC" firstStartedPulling="2025-12-02 10:52:27.345096805 +0000 UTC m=+2671.540271107" lastFinishedPulling="2025-12-02 10:52:30.874780213 +0000 UTC m=+2675.069954515" observedRunningTime="2025-12-02 10:52:31.410618498 +0000 UTC m=+2675.605792800" watchObservedRunningTime="2025-12-02 10:52:31.413530129 +0000 UTC m=+2675.608704461" Dec 02 10:52:34 crc kubenswrapper[4813]: I1202 10:52:34.274045 4813 patch_prober.go:28] interesting 
pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:52:34 crc kubenswrapper[4813]: I1202 10:52:34.274642 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:52:35 crc kubenswrapper[4813]: I1202 10:52:35.913378 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:35 crc kubenswrapper[4813]: I1202 10:52:35.913438 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:35 crc kubenswrapper[4813]: I1202 10:52:35.963095 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:36 crc kubenswrapper[4813]: I1202 10:52:36.513327 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:36 crc kubenswrapper[4813]: I1202 10:52:36.572191 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5lxs"] Dec 02 10:52:37 crc kubenswrapper[4813]: E1202 10:52:37.310528 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:57098->38.102.83.145:38861: write tcp 38.102.83.145:57098->38.102.83.145:38861: write: broken pipe Dec 02 10:52:38 crc kubenswrapper[4813]: I1202 10:52:38.472783 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5lxs" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerName="registry-server" containerID="cri-o://d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928" gracePeriod=2 Dec 02 10:52:38 crc kubenswrapper[4813]: I1202 10:52:38.950741 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.134416 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-catalog-content\") pod \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.134473 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v877x\" (UniqueName: \"kubernetes.io/projected/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-kube-api-access-v877x\") pod \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.134535 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-utilities\") pod \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\" (UID: \"aa14546f-d696-4b0e-bd85-a9368cc2d1bd\") " Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.135671 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-utilities" (OuterVolumeSpecName: "utilities") pod "aa14546f-d696-4b0e-bd85-a9368cc2d1bd" (UID: "aa14546f-d696-4b0e-bd85-a9368cc2d1bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.145545 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-kube-api-access-v877x" (OuterVolumeSpecName: "kube-api-access-v877x") pod "aa14546f-d696-4b0e-bd85-a9368cc2d1bd" (UID: "aa14546f-d696-4b0e-bd85-a9368cc2d1bd"). InnerVolumeSpecName "kube-api-access-v877x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.155586 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa14546f-d696-4b0e-bd85-a9368cc2d1bd" (UID: "aa14546f-d696-4b0e-bd85-a9368cc2d1bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.236378 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.236702 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v877x\" (UniqueName: \"kubernetes.io/projected/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-kube-api-access-v877x\") on node \"crc\" DevicePath \"\"" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.236715 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa14546f-d696-4b0e-bd85-a9368cc2d1bd-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.483454 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerID="d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928" exitCode=0 Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.483494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5lxs" event={"ID":"aa14546f-d696-4b0e-bd85-a9368cc2d1bd","Type":"ContainerDied","Data":"d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928"} Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.483542 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5lxs" event={"ID":"aa14546f-d696-4b0e-bd85-a9368cc2d1bd","Type":"ContainerDied","Data":"22ae4dbbd865939d5d304039a7fe87d8bb06c87da2bab90171f34e1eafcb3223"} Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.483566 4813 scope.go:117] "RemoveContainer" containerID="d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.483569 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5lxs" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.520042 4813 scope.go:117] "RemoveContainer" containerID="a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.538424 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5lxs"] Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.544777 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5lxs"] Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.551984 4813 scope.go:117] "RemoveContainer" containerID="1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.601260 4813 scope.go:117] "RemoveContainer" containerID="d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928" Dec 02 10:52:39 crc kubenswrapper[4813]: E1202 10:52:39.601728 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928\": container with ID starting with d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928 not found: ID does not exist" containerID="d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.601760 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928"} err="failed to get container status \"d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928\": rpc error: code = NotFound desc = could not find container \"d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928\": container with ID starting with d1221a72cddb60c3c74ae5996ec7ea79fe96fd5c036c8814a9372f7f02a96928 not found: ID does not exist" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.601783 4813 scope.go:117] "RemoveContainer" containerID="a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617" Dec 02 10:52:39 crc kubenswrapper[4813]: E1202 10:52:39.603006 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617\": container with ID starting with a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617 not found: ID does not exist" containerID="a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.603093 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617"} err="failed to get container status \"a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617\": rpc error: code = NotFound desc = could not find container \"a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617\": container with ID starting with a992ca1b096582aa77fcbdaa62e788685e07f336e802c2e7ee185435ee332617 not found: ID does not exist" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.603122 4813 scope.go:117] "RemoveContainer" containerID="1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276" Dec 02 10:52:39 crc kubenswrapper[4813]: E1202 10:52:39.603675 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276\": container with ID starting with 1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276 not found: ID does not exist" containerID="1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276" Dec 02 10:52:39 crc kubenswrapper[4813]: I1202 10:52:39.603730 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276"} err="failed to get container status \"1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276\": rpc error: code = NotFound desc = could not find container \"1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276\": container with ID starting with 1c85e84535a6069345965702e8ce5150977ff947a98fdf9f1a48516c1eef1276 not found: ID does not exist" Dec 02 10:52:40 crc kubenswrapper[4813]: I1202 10:52:40.080721 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" path="/var/lib/kubelet/pods/aa14546f-d696-4b0e-bd85-a9368cc2d1bd/volumes" Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.897947 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.913814 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.921759 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.928860 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xdj9g"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.936156 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.943239 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.955887 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.966199 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.976009 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.983392 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ftkfz"] Dec 02 10:52:52 crc kubenswrapper[4813]: I1202 10:52:52.990922 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qrpmm"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:52.999953 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xdj9g"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:53.008999 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8qn22"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:53.016999 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qkdhm"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:53.025415 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jxwj4"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:53.036106 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nrvwh"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:53.044976 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sjq9d"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:53.051328 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rmq2r"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:53.056562 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz"] Dec 02 10:52:53 crc kubenswrapper[4813]: I1202 10:52:53.061629 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v6tfz"] Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.083346 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17013aed-1ecc-4f3f-b160-a0aef30d713c" path="/var/lib/kubelet/pods/17013aed-1ecc-4f3f-b160-a0aef30d713c/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.084753 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192b9cbd-3f7c-4786-8028-60dd72662744" path="/var/lib/kubelet/pods/192b9cbd-3f7c-4786-8028-60dd72662744/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.085965 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d51de72-2e56-41e2-94da-1faf74e0b3c3" path="/var/lib/kubelet/pods/1d51de72-2e56-41e2-94da-1faf74e0b3c3/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.087180 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5ced15-3788-4822-a702-d60d57b5e36f" path="/var/lib/kubelet/pods/1e5ced15-3788-4822-a702-d60d57b5e36f/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.089604 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41db46e0-68d6-435f-a4b8-16a67663eedf" path="/var/lib/kubelet/pods/41db46e0-68d6-435f-a4b8-16a67663eedf/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.090880 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d68626-018a-4ac1-a15f-35e9ad8444f4" path="/var/lib/kubelet/pods/44d68626-018a-4ac1-a15f-35e9ad8444f4/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.092177 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1393a3f-de97-4a49-a26c-c371126a3395" path="/var/lib/kubelet/pods/b1393a3f-de97-4a49-a26c-c371126a3395/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.093967 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baacde6b-2915-4fb8-a6db-793969a48c79" path="/var/lib/kubelet/pods/baacde6b-2915-4fb8-a6db-793969a48c79/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.095303 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f07ad40f-936a-40fd-b69e-308239229a25" path="/var/lib/kubelet/pods/f07ad40f-936a-40fd-b69e-308239229a25/volumes" Dec 02 10:52:54 crc kubenswrapper[4813]: I1202 10:52:54.096486 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e81a7d-ce65-4039-8a52-cfc250d66c0b" path="/var/lib/kubelet/pods/f6e81a7d-ce65-4039-8a52-cfc250d66c0b/volumes" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.965433 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc"] Dec 02 10:52:58 crc kubenswrapper[4813]: E1202 10:52:58.966295 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerName="extract-utilities" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.966311 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerName="extract-utilities" Dec 02 10:52:58 crc kubenswrapper[4813]: E1202 10:52:58.966324 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerName="extract-content" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.966330 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerName="extract-content" Dec 02 10:52:58 crc kubenswrapper[4813]: E1202 10:52:58.966348 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerName="registry-server" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.966354 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerName="registry-server" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.966551 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa14546f-d696-4b0e-bd85-a9368cc2d1bd" containerName="registry-server" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.967149 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.970157 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.970241 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.970244 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.970318 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.974414 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:52:58 crc kubenswrapper[4813]: I1202 10:52:58.991519 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc"] Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.106002 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrcjs\" (UniqueName: \"kubernetes.io/projected/a2f71156-b569-4254-8cc5-2e38a5ca5edc-kube-api-access-zrcjs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.106349 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.106494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.106803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.106862 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.211127 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zrcjs\" (UniqueName: \"kubernetes.io/projected/a2f71156-b569-4254-8cc5-2e38a5ca5edc-kube-api-access-zrcjs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.211264 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.211380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.211526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.211566 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.223397 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.232526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.234849 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.236031 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.238298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrcjs\" (UniqueName: \"kubernetes.io/projected/a2f71156-b569-4254-8cc5-2e38a5ca5edc-kube-api-access-zrcjs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.288340 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" Dec 02 10:52:59 crc kubenswrapper[4813]: I1202 10:52:59.816470 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc"] Dec 02 10:53:00 crc kubenswrapper[4813]: I1202 10:53:00.677463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" event={"ID":"a2f71156-b569-4254-8cc5-2e38a5ca5edc","Type":"ContainerStarted","Data":"4de6ffb568f1d200f565532ddd2fc09553c8ecd68824762d7e47ed805e3a4f8f"} Dec 02 10:53:00 crc kubenswrapper[4813]: I1202 10:53:00.679521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" event={"ID":"a2f71156-b569-4254-8cc5-2e38a5ca5edc","Type":"ContainerStarted","Data":"27d09f5647fe77ab1f0ac7b3a983d0fc048e669b726dc4d6c4b656305ad09f2d"} Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.273418 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.274239 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.274294 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.275012 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbee9828e642b95594b6a1d54e70cf903aefb8fe393ade18728a5f77245eed79"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.275109 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://cbee9828e642b95594b6a1d54e70cf903aefb8fe393ade18728a5f77245eed79" 
gracePeriod=600
Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.719719 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="cbee9828e642b95594b6a1d54e70cf903aefb8fe393ade18728a5f77245eed79" exitCode=0
Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.719780 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"cbee9828e642b95594b6a1d54e70cf903aefb8fe393ade18728a5f77245eed79"}
Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.720094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a"}
Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.720128 4813 scope.go:117] "RemoveContainer" containerID="9b8f2bcc934c569041b9cc16f47fbff82b8f52d683d328d2d2ce58f205fac152"
Dec 02 10:53:04 crc kubenswrapper[4813]: I1202 10:53:04.742277 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" podStartSLOduration=6.294269418 podStartE2EDuration="6.742257197s" podCreationTimestamp="2025-12-02 10:52:58 +0000 UTC" firstStartedPulling="2025-12-02 10:52:59.821818923 +0000 UTC m=+2704.016993235" lastFinishedPulling="2025-12-02 10:53:00.269806692 +0000 UTC m=+2704.464981014" observedRunningTime="2025-12-02 10:53:00.697219327 +0000 UTC m=+2704.892393659" watchObservedRunningTime="2025-12-02 10:53:04.742257197 +0000 UTC m=+2708.937431509"
Dec 02 10:53:05 crc kubenswrapper[4813]: I1202 10:53:05.112748 4813 scope.go:117] "RemoveContainer" containerID="67c64dac86a433fc0845dd9c036e26a2e303f38fd962a13488a579e4182e4db8"
Dec 02 10:53:05 crc kubenswrapper[4813]: I1202 10:53:05.140831 4813 scope.go:117] "RemoveContainer" containerID="013a10a8d810d9330165f55d3fce76dec51f3342f90d1104f1de7c96cfab7491"
Dec 02 10:53:05 crc kubenswrapper[4813]: I1202 10:53:05.260880 4813 scope.go:117] "RemoveContainer" containerID="29a18c84a2cc1837c9ae0297501b18069ba9e0406ec818c74c2d7bef5baaefd3"
Dec 02 10:53:05 crc kubenswrapper[4813]: I1202 10:53:05.323643 4813 scope.go:117] "RemoveContainer" containerID="bc382d64ecb40b9d3065d462033b7aaab6771df7368ef4c2f3632c71abb141f8"
Dec 02 10:53:05 crc kubenswrapper[4813]: I1202 10:53:05.353760 4813 scope.go:117] "RemoveContainer" containerID="84baaadf1576e6c32afc7afac7d36000feb8099bcc83126946aa3c84021680b9"
Dec 02 10:53:05 crc kubenswrapper[4813]: I1202 10:53:05.408136 4813 scope.go:117] "RemoveContainer" containerID="b8a5961b015572826b1cf308ad92161b72aa80273dc51a1dd5322ba4229c23fb"
Dec 02 10:53:05 crc kubenswrapper[4813]: I1202 10:53:05.461495 4813 scope.go:117] "RemoveContainer" containerID="d616f340caa88c6f9ca96b58c62c12903261a48aaa7dfcf393cc9c936e479f0a"
Dec 02 10:53:05 crc kubenswrapper[4813]: I1202 10:53:05.514802 4813 scope.go:117] "RemoveContainer" containerID="e91c37cf4976becdaccf8755c31d331f932fc0906df7b30d78746ebffcf78d66"
Dec 02 10:53:12 crc kubenswrapper[4813]: I1202 10:53:12.798629 4813 generic.go:334] "Generic (PLEG): container finished" podID="a2f71156-b569-4254-8cc5-2e38a5ca5edc" containerID="4de6ffb568f1d200f565532ddd2fc09553c8ecd68824762d7e47ed805e3a4f8f" exitCode=0
Dec 02 10:53:12 crc kubenswrapper[4813]: I1202 10:53:12.798742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" event={"ID":"a2f71156-b569-4254-8cc5-2e38a5ca5edc","Type":"ContainerDied","Data":"4de6ffb568f1d200f565532ddd2fc09553c8ecd68824762d7e47ed805e3a4f8f"}
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.265600 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.383878 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-repo-setup-combined-ca-bundle\") pod \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") "
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.384300 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-inventory\") pod \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") "
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.384383 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ceph\") pod \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") "
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.384482 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrcjs\" (UniqueName: \"kubernetes.io/projected/a2f71156-b569-4254-8cc5-2e38a5ca5edc-kube-api-access-zrcjs\") pod \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") "
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.384515 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ssh-key\") pod \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\" (UID: \"a2f71156-b569-4254-8cc5-2e38a5ca5edc\") "
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.391329 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f71156-b569-4254-8cc5-2e38a5ca5edc-kube-api-access-zrcjs" (OuterVolumeSpecName: "kube-api-access-zrcjs") pod "a2f71156-b569-4254-8cc5-2e38a5ca5edc" (UID: "a2f71156-b569-4254-8cc5-2e38a5ca5edc"). InnerVolumeSpecName "kube-api-access-zrcjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.391694 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ceph" (OuterVolumeSpecName: "ceph") pod "a2f71156-b569-4254-8cc5-2e38a5ca5edc" (UID: "a2f71156-b569-4254-8cc5-2e38a5ca5edc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.391811 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a2f71156-b569-4254-8cc5-2e38a5ca5edc" (UID: "a2f71156-b569-4254-8cc5-2e38a5ca5edc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.410374 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a2f71156-b569-4254-8cc5-2e38a5ca5edc" (UID: "a2f71156-b569-4254-8cc5-2e38a5ca5edc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.412140 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-inventory" (OuterVolumeSpecName: "inventory") pod "a2f71156-b569-4254-8cc5-2e38a5ca5edc" (UID: "a2f71156-b569-4254-8cc5-2e38a5ca5edc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.486482 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.486512 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ceph\") on node \"crc\" DevicePath \"\""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.486521 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrcjs\" (UniqueName: \"kubernetes.io/projected/a2f71156-b569-4254-8cc5-2e38a5ca5edc-kube-api-access-zrcjs\") on node \"crc\" DevicePath \"\""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.486532 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.486540 4813 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f71156-b569-4254-8cc5-2e38a5ca5edc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.821390 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc" event={"ID":"a2f71156-b569-4254-8cc5-2e38a5ca5edc","Type":"ContainerDied","Data":"27d09f5647fe77ab1f0ac7b3a983d0fc048e669b726dc4d6c4b656305ad09f2d"}
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.821447 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d09f5647fe77ab1f0ac7b3a983d0fc048e669b726dc4d6c4b656305ad09f2d"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.821460 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.894989 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"]
Dec 02 10:53:14 crc kubenswrapper[4813]: E1202 10:53:14.895880 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f71156-b569-4254-8cc5-2e38a5ca5edc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.895912 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f71156-b569-4254-8cc5-2e38a5ca5edc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.896239 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f71156-b569-4254-8cc5-2e38a5ca5edc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.899629 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.904421 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.904467 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.904617 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.905466 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.905739 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.905732 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"]
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.996121 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7qj\" (UniqueName: \"kubernetes.io/projected/274340d8-d37f-4d14-a810-be92bd373f3f-kube-api-access-sr7qj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.996181 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.996309 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.996463 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:14 crc kubenswrapper[4813]: I1202 10:53:14.996605 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.097802 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.097866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr7qj\" (UniqueName: \"kubernetes.io/projected/274340d8-d37f-4d14-a810-be92bd373f3f-kube-api-access-sr7qj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.097889 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.097943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.098002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.104295 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.105007 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.105324 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.107108 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.116014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr7qj\" (UniqueName: \"kubernetes.io/projected/274340d8-d37f-4d14-a810-be92bd373f3f-kube-api-access-sr7qj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.219782 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.741796 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"]
Dec 02 10:53:15 crc kubenswrapper[4813]: W1202 10:53:15.751242 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod274340d8_d37f_4d14_a810_be92bd373f3f.slice/crio-a45fb79c704df09a6ae54a3608b75d4508e769fb12df268cdcee52d73209753b WatchSource:0}: Error finding container a45fb79c704df09a6ae54a3608b75d4508e769fb12df268cdcee52d73209753b: Status 404 returned error can't find the container with id a45fb79c704df09a6ae54a3608b75d4508e769fb12df268cdcee52d73209753b
Dec 02 10:53:15 crc kubenswrapper[4813]: I1202 10:53:15.830973 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq" event={"ID":"274340d8-d37f-4d14-a810-be92bd373f3f","Type":"ContainerStarted","Data":"a45fb79c704df09a6ae54a3608b75d4508e769fb12df268cdcee52d73209753b"}
Dec 02 10:53:16 crc kubenswrapper[4813]: I1202 10:53:16.840303 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq" event={"ID":"274340d8-d37f-4d14-a810-be92bd373f3f","Type":"ContainerStarted","Data":"a2062efd2a8d59a9350e21dab542ddcc5c0f73c856b4d3518af23821a15f05e1"}
Dec 02 10:53:16 crc kubenswrapper[4813]: I1202 10:53:16.856513 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq" podStartSLOduration=2.432778198 podStartE2EDuration="2.856490859s" podCreationTimestamp="2025-12-02 10:53:14 +0000 UTC" firstStartedPulling="2025-12-02 10:53:15.752995224 +0000 UTC m=+2719.948169536" lastFinishedPulling="2025-12-02 10:53:16.176707895 +0000 UTC m=+2720.371882197" observedRunningTime="2025-12-02 10:53:16.854506602 +0000 UTC m=+2721.049680904" watchObservedRunningTime="2025-12-02 10:53:16.856490859 +0000 UTC m=+2721.051665171"
Dec 02 10:54:05 crc kubenswrapper[4813]: I1202 10:54:05.681576 4813 scope.go:117] "RemoveContainer" containerID="0f0445d9371f3d0f18fcb2de73e3a91361e50bc4a67408955f83a67c266f8ae5"
Dec 02 10:54:05 crc kubenswrapper[4813]: I1202 10:54:05.713522 4813 scope.go:117] "RemoveContainer" containerID="1e25d543fddf6c417fa3da569dc503d434569bd2691b84d1cd2f76a0d4cf3528"
Dec 02 10:55:04 crc kubenswrapper[4813]: I1202 10:55:04.274313 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:55:04 crc kubenswrapper[4813]: I1202 10:55:04.274979 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:55:04 crc kubenswrapper[4813]: I1202 10:55:04.846995 4813 generic.go:334] "Generic (PLEG): container finished" podID="274340d8-d37f-4d14-a810-be92bd373f3f" containerID="a2062efd2a8d59a9350e21dab542ddcc5c0f73c856b4d3518af23821a15f05e1" exitCode=0
Dec 02 10:55:04 crc kubenswrapper[4813]: I1202 10:55:04.847143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq" event={"ID":"274340d8-d37f-4d14-a810-be92bd373f3f","Type":"ContainerDied","Data":"a2062efd2a8d59a9350e21dab542ddcc5c0f73c856b4d3518af23821a15f05e1"}
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.236953 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.349937 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-inventory\") pod \"274340d8-d37f-4d14-a810-be92bd373f3f\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") "
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.350045 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ssh-key\") pod \"274340d8-d37f-4d14-a810-be92bd373f3f\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") "
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.350228 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr7qj\" (UniqueName: \"kubernetes.io/projected/274340d8-d37f-4d14-a810-be92bd373f3f-kube-api-access-sr7qj\") pod \"274340d8-d37f-4d14-a810-be92bd373f3f\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") "
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.350297 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-bootstrap-combined-ca-bundle\") pod \"274340d8-d37f-4d14-a810-be92bd373f3f\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") "
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.350384 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ceph\") pod \"274340d8-d37f-4d14-a810-be92bd373f3f\" (UID: \"274340d8-d37f-4d14-a810-be92bd373f3f\") "
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.356484 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274340d8-d37f-4d14-a810-be92bd373f3f-kube-api-access-sr7qj" (OuterVolumeSpecName: "kube-api-access-sr7qj") pod "274340d8-d37f-4d14-a810-be92bd373f3f" (UID: "274340d8-d37f-4d14-a810-be92bd373f3f"). InnerVolumeSpecName "kube-api-access-sr7qj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.357129 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ceph" (OuterVolumeSpecName: "ceph") pod "274340d8-d37f-4d14-a810-be92bd373f3f" (UID: "274340d8-d37f-4d14-a810-be92bd373f3f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.358111 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "274340d8-d37f-4d14-a810-be92bd373f3f" (UID: "274340d8-d37f-4d14-a810-be92bd373f3f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.379338 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "274340d8-d37f-4d14-a810-be92bd373f3f" (UID: "274340d8-d37f-4d14-a810-be92bd373f3f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.393809 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-inventory" (OuterVolumeSpecName: "inventory") pod "274340d8-d37f-4d14-a810-be92bd373f3f" (UID: "274340d8-d37f-4d14-a810-be92bd373f3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.453065 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ceph\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.453136 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.453155 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.453173 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr7qj\" (UniqueName: \"kubernetes.io/projected/274340d8-d37f-4d14-a810-be92bd373f3f-kube-api-access-sr7qj\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.453193 4813 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274340d8-d37f-4d14-a810-be92bd373f3f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.869286 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq" event={"ID":"274340d8-d37f-4d14-a810-be92bd373f3f","Type":"ContainerDied","Data":"a45fb79c704df09a6ae54a3608b75d4508e769fb12df268cdcee52d73209753b"}
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.869698 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a45fb79c704df09a6ae54a3608b75d4508e769fb12df268cdcee52d73209753b"
Dec 02 10:55:06 crc kubenswrapper[4813]: I1202 10:55:06.869353 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.005725 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"]
Dec 02 10:55:07 crc kubenswrapper[4813]: E1202 10:55:07.006390 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274340d8-d37f-4d14-a810-be92bd373f3f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.006424 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="274340d8-d37f-4d14-a810-be92bd373f3f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.006732 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="274340d8-d37f-4d14-a810-be92bd373f3f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.007628 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.010561 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.016053 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.016162 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.016373 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.018675 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.065682 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.065737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.065914 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5nhl\" (UniqueName: \"kubernetes.io/projected/bf066df0-6f94-4514-9d92-30d252aea2f7-kube-api-access-n5nhl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.066225 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.078493 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"]
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.167896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.167957 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.168008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5nhl\" (UniqueName: \"kubernetes.io/projected/bf066df0-6f94-4514-9d92-30d252aea2f7-kube-api-access-n5nhl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.168062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.172422 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.172520 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.172576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.185352 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5nhl\" (UniqueName: \"kubernetes.io/projected/bf066df0-6f94-4514-9d92-30d252aea2f7-kube-api-access-n5nhl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zn92g\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.335480 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:07 crc kubenswrapper[4813]: I1202 10:55:07.878981 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"]
Dec 02 10:55:08 crc kubenswrapper[4813]: I1202 10:55:08.884944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g" event={"ID":"bf066df0-6f94-4514-9d92-30d252aea2f7","Type":"ContainerStarted","Data":"ce7458a5cb6842071d2543b5e66104b37d28a75add0a31f9b0e44033d2bbaa5e"}
Dec 02 10:55:08 crc kubenswrapper[4813]: I1202 10:55:08.885302 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g" event={"ID":"bf066df0-6f94-4514-9d92-30d252aea2f7","Type":"ContainerStarted","Data":"fdc14763536b1c162522ca18bdc804fe5ca28bf78536ecd908eebb2d8360fd54"}
Dec 02 10:55:08 crc kubenswrapper[4813]: I1202 10:55:08.913115 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g" podStartSLOduration=2.472479955 podStartE2EDuration="2.913094795s" podCreationTimestamp="2025-12-02 10:55:06 +0000 UTC" firstStartedPulling="2025-12-02 10:55:07.889263533 +0000 UTC m=+2832.084437835" lastFinishedPulling="2025-12-02 10:55:08.329878333 +0000 UTC m=+2832.525052675" observedRunningTime="2025-12-02 10:55:08.905343215 +0000 UTC m=+2833.100517537" watchObservedRunningTime="2025-12-02 10:55:08.913094795 +0000 UTC m=+2833.108269107"
Dec 02 10:55:34 crc kubenswrapper[4813]: I1202 10:55:34.274239 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:55:34 crc kubenswrapper[4813]: I1202 10:55:34.274834 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:55:35 crc kubenswrapper[4813]: I1202 10:55:35.137981 4813 generic.go:334] "Generic (PLEG): container finished" podID="bf066df0-6f94-4514-9d92-30d252aea2f7" containerID="ce7458a5cb6842071d2543b5e66104b37d28a75add0a31f9b0e44033d2bbaa5e" exitCode=0
Dec 02 10:55:35 crc kubenswrapper[4813]: I1202 10:55:35.138025 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g" event={"ID":"bf066df0-6f94-4514-9d92-30d252aea2f7","Type":"ContainerDied","Data":"ce7458a5cb6842071d2543b5e66104b37d28a75add0a31f9b0e44033d2bbaa5e"}
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.558005 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.704536 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ssh-key\") pod \"bf066df0-6f94-4514-9d92-30d252aea2f7\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") "
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.704586 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5nhl\" (UniqueName: \"kubernetes.io/projected/bf066df0-6f94-4514-9d92-30d252aea2f7-kube-api-access-n5nhl\") pod \"bf066df0-6f94-4514-9d92-30d252aea2f7\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") "
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.704612 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ceph\") pod \"bf066df0-6f94-4514-9d92-30d252aea2f7\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") "
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.704759 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-inventory\") pod \"bf066df0-6f94-4514-9d92-30d252aea2f7\" (UID: \"bf066df0-6f94-4514-9d92-30d252aea2f7\") "
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.710523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ceph" (OuterVolumeSpecName: "ceph") pod "bf066df0-6f94-4514-9d92-30d252aea2f7" (UID: "bf066df0-6f94-4514-9d92-30d252aea2f7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.711278 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf066df0-6f94-4514-9d92-30d252aea2f7-kube-api-access-n5nhl" (OuterVolumeSpecName: "kube-api-access-n5nhl") pod "bf066df0-6f94-4514-9d92-30d252aea2f7" (UID: "bf066df0-6f94-4514-9d92-30d252aea2f7"). InnerVolumeSpecName "kube-api-access-n5nhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.732373 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-inventory" (OuterVolumeSpecName: "inventory") pod "bf066df0-6f94-4514-9d92-30d252aea2f7" (UID: "bf066df0-6f94-4514-9d92-30d252aea2f7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.732624 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bf066df0-6f94-4514-9d92-30d252aea2f7" (UID: "bf066df0-6f94-4514-9d92-30d252aea2f7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.809680 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.809756 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5nhl\" (UniqueName: \"kubernetes.io/projected/bf066df0-6f94-4514-9d92-30d252aea2f7-kube-api-access-n5nhl\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.809780 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-ceph\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:36 crc kubenswrapper[4813]: I1202 10:55:36.809806 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf066df0-6f94-4514-9d92-30d252aea2f7-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.159553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g" event={"ID":"bf066df0-6f94-4514-9d92-30d252aea2f7","Type":"ContainerDied","Data":"fdc14763536b1c162522ca18bdc804fe5ca28bf78536ecd908eebb2d8360fd54"}
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.160626 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdc14763536b1c162522ca18bdc804fe5ca28bf78536ecd908eebb2d8360fd54"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.159680 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zn92g"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.281874 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"]
Dec 02 10:55:37 crc kubenswrapper[4813]: E1202 10:55:37.282546 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf066df0-6f94-4514-9d92-30d252aea2f7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.282578 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf066df0-6f94-4514-9d92-30d252aea2f7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.284016 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf066df0-6f94-4514-9d92-30d252aea2f7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.285218 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.289398 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.290723 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.298256 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"]
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.298530 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.298573 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.298804 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.319183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.319228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtv65\" (UniqueName: \"kubernetes.io/projected/818f8423-ecb9-4ec4-a4af-a6d4e9979032-kube-api-access-qtv65\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.319300 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.319363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.421702 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.421762 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtv65\" (UniqueName: \"kubernetes.io/projected/818f8423-ecb9-4ec4-a4af-a6d4e9979032-kube-api-access-qtv65\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.421825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.421885 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.427380 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.428229 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.428994 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.437716 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtv65\" (UniqueName: \"kubernetes.io/projected/818f8423-ecb9-4ec4-a4af-a6d4e9979032-kube-api-access-qtv65\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l292c\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:37 crc kubenswrapper[4813]: I1202 10:55:37.622264 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:38 crc kubenswrapper[4813]: I1202 10:55:38.235194 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"]
Dec 02 10:55:39 crc kubenswrapper[4813]: I1202 10:55:39.182999 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c" event={"ID":"818f8423-ecb9-4ec4-a4af-a6d4e9979032","Type":"ContainerStarted","Data":"26d1f5002bc3fb4654743956f30be33fc224d20e07eca280de0e3297deb16cf6"}
Dec 02 10:55:39 crc kubenswrapper[4813]: I1202 10:55:39.183593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c" event={"ID":"818f8423-ecb9-4ec4-a4af-a6d4e9979032","Type":"ContainerStarted","Data":"4ab5c669ce5e6583d7ae4d3ea5a3914f3916e65039db9646acdc14e36b647546"}
Dec 02 10:55:39 crc kubenswrapper[4813]: I1202 10:55:39.201780 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c" podStartSLOduration=1.7316828549999999 podStartE2EDuration="2.201762873s" podCreationTimestamp="2025-12-02 10:55:37 +0000 UTC" firstStartedPulling="2025-12-02 10:55:38.235452724 +0000 UTC m=+2862.430627026" lastFinishedPulling="2025-12-02 10:55:38.705532732 +0000 UTC m=+2862.900707044" observedRunningTime="2025-12-02 10:55:39.201401923 +0000 UTC m=+2863.396576255" watchObservedRunningTime="2025-12-02 10:55:39.201762873 +0000 UTC m=+2863.396937185"
Dec 02 10:55:44 crc kubenswrapper[4813]: I1202 10:55:44.226249 4813 generic.go:334] "Generic (PLEG): container finished" podID="818f8423-ecb9-4ec4-a4af-a6d4e9979032" containerID="26d1f5002bc3fb4654743956f30be33fc224d20e07eca280de0e3297deb16cf6" exitCode=0
Dec 02 10:55:44 crc kubenswrapper[4813]: I1202 10:55:44.226348 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c" event={"ID":"818f8423-ecb9-4ec4-a4af-a6d4e9979032","Type":"ContainerDied","Data":"26d1f5002bc3fb4654743956f30be33fc224d20e07eca280de0e3297deb16cf6"}
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.763024 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.798665 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ceph\") pod \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") "
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.798881 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ssh-key\") pod \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") "
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.798935 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-inventory\") pod \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") "
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.799035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtv65\" (UniqueName: \"kubernetes.io/projected/818f8423-ecb9-4ec4-a4af-a6d4e9979032-kube-api-access-qtv65\") pod \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\" (UID: \"818f8423-ecb9-4ec4-a4af-a6d4e9979032\") "
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.803978 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818f8423-ecb9-4ec4-a4af-a6d4e9979032-kube-api-access-qtv65" (OuterVolumeSpecName: "kube-api-access-qtv65") pod "818f8423-ecb9-4ec4-a4af-a6d4e9979032" (UID: "818f8423-ecb9-4ec4-a4af-a6d4e9979032"). InnerVolumeSpecName "kube-api-access-qtv65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.804863 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ceph" (OuterVolumeSpecName: "ceph") pod "818f8423-ecb9-4ec4-a4af-a6d4e9979032" (UID: "818f8423-ecb9-4ec4-a4af-a6d4e9979032"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.824309 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-inventory" (OuterVolumeSpecName: "inventory") pod "818f8423-ecb9-4ec4-a4af-a6d4e9979032" (UID: "818f8423-ecb9-4ec4-a4af-a6d4e9979032"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.824531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "818f8423-ecb9-4ec4-a4af-a6d4e9979032" (UID: "818f8423-ecb9-4ec4-a4af-a6d4e9979032"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.901349 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.901377 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtv65\" (UniqueName: \"kubernetes.io/projected/818f8423-ecb9-4ec4-a4af-a6d4e9979032-kube-api-access-qtv65\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.901388 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ceph\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:45 crc kubenswrapper[4813]: I1202 10:55:45.901397 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/818f8423-ecb9-4ec4-a4af-a6d4e9979032-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.250488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c" event={"ID":"818f8423-ecb9-4ec4-a4af-a6d4e9979032","Type":"ContainerDied","Data":"4ab5c669ce5e6583d7ae4d3ea5a3914f3916e65039db9646acdc14e36b647546"}
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.250541 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ab5c669ce5e6583d7ae4d3ea5a3914f3916e65039db9646acdc14e36b647546"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.250651 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l292c"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.351396 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"]
Dec 02 10:55:46 crc kubenswrapper[4813]: E1202 10:55:46.351914 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818f8423-ecb9-4ec4-a4af-a6d4e9979032" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.351938 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="818f8423-ecb9-4ec4-a4af-a6d4e9979032" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.352160 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="818f8423-ecb9-4ec4-a4af-a6d4e9979032" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.352862 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.355324 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.355849 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.356543 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.358890 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.359626 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.364757 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"]
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.412135 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmskq\" (UniqueName: \"kubernetes.io/projected/8b306c3a-786a-44f8-83be-75641ead26f3-kube-api-access-bmskq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.412558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.412931 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.413015 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.515522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.515999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.516264 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.516450 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmskq\" (UniqueName: \"kubernetes.io/projected/8b306c3a-786a-44f8-83be-75641ead26f3-kube-api-access-bmskq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.523146 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.526715 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.526996 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.537651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmskq\" (UniqueName: \"kubernetes.io/projected/8b306c3a-786a-44f8-83be-75641ead26f3-kube-api-access-bmskq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g5ppq\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:46 crc kubenswrapper[4813]: I1202 10:55:46.672605 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"
Dec 02 10:55:47 crc kubenswrapper[4813]: I1202 10:55:47.231504 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq"]
Dec 02 10:55:47 crc kubenswrapper[4813]: I1202 10:55:47.260059 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" event={"ID":"8b306c3a-786a-44f8-83be-75641ead26f3","Type":"ContainerStarted","Data":"9c8683534660846792aa8e05cb601cbfb3707c797ebc9dde5cd1bdb680143f2f"}
Dec 02 10:55:48 crc kubenswrapper[4813]: I1202 10:55:48.271423 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" event={"ID":"8b306c3a-786a-44f8-83be-75641ead26f3","Type":"ContainerStarted","Data":"f109f647368c2ea5c597e63b152cdf455eab7507f9574f89d9dcbe75551c3666"}
Dec 02 10:55:48 crc kubenswrapper[4813]: I1202 10:55:48.298448 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" podStartSLOduration=1.659902432 podStartE2EDuration="2.298402383s" podCreationTimestamp="2025-12-02 10:55:46 +0000 UTC" firstStartedPulling="2025-12-02 10:55:47.23620353 +0000 UTC m=+2871.431377832" lastFinishedPulling="2025-12-02 10:55:47.874703471 +0000 UTC m=+2872.069877783" observedRunningTime="2025-12-02 10:55:48.289187471 +0000 UTC m=+2872.484361813" watchObservedRunningTime="2025-12-02 10:55:48.298402383 +0000 UTC m=+2872.493576735"
Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.273614 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.274312 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.274397 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.275654 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.275787 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" gracePeriod=600
Dec 02 10:56:04 crc kubenswrapper[4813]: E1202 10:56:04.404446 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.414452 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" exitCode=0 Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.414491 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a"} Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.414519 4813 scope.go:117] "RemoveContainer" containerID="cbee9828e642b95594b6a1d54e70cf903aefb8fe393ade18728a5f77245eed79" Dec 02 10:56:04 crc kubenswrapper[4813]: I1202 10:56:04.415116 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:56:04 crc kubenswrapper[4813]: E1202 10:56:04.416030 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:56:19 crc kubenswrapper[4813]: I1202 10:56:19.068191 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:56:19 crc kubenswrapper[4813]: E1202 10:56:19.068906 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:56:27 crc kubenswrapper[4813]: I1202 10:56:27.633499 4813 generic.go:334] "Generic (PLEG): container finished" podID="8b306c3a-786a-44f8-83be-75641ead26f3" containerID="f109f647368c2ea5c597e63b152cdf455eab7507f9574f89d9dcbe75551c3666" exitCode=0 Dec 02 10:56:27 crc kubenswrapper[4813]: I1202 10:56:27.633712 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" event={"ID":"8b306c3a-786a-44f8-83be-75641ead26f3","Type":"ContainerDied","Data":"f109f647368c2ea5c597e63b152cdf455eab7507f9574f89d9dcbe75551c3666"} Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.059456 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.215328 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmskq\" (UniqueName: \"kubernetes.io/projected/8b306c3a-786a-44f8-83be-75641ead26f3-kube-api-access-bmskq\") pod \"8b306c3a-786a-44f8-83be-75641ead26f3\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.215399 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-inventory\") pod \"8b306c3a-786a-44f8-83be-75641ead26f3\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.215451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ssh-key\") pod \"8b306c3a-786a-44f8-83be-75641ead26f3\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.215494 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ceph\") pod \"8b306c3a-786a-44f8-83be-75641ead26f3\" (UID: \"8b306c3a-786a-44f8-83be-75641ead26f3\") " Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.223333 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b306c3a-786a-44f8-83be-75641ead26f3-kube-api-access-bmskq" (OuterVolumeSpecName: "kube-api-access-bmskq") pod "8b306c3a-786a-44f8-83be-75641ead26f3" (UID: "8b306c3a-786a-44f8-83be-75641ead26f3"). InnerVolumeSpecName "kube-api-access-bmskq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.242934 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ceph" (OuterVolumeSpecName: "ceph") pod "8b306c3a-786a-44f8-83be-75641ead26f3" (UID: "8b306c3a-786a-44f8-83be-75641ead26f3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.250726 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-inventory" (OuterVolumeSpecName: "inventory") pod "8b306c3a-786a-44f8-83be-75641ead26f3" (UID: "8b306c3a-786a-44f8-83be-75641ead26f3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.251996 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8b306c3a-786a-44f8-83be-75641ead26f3" (UID: "8b306c3a-786a-44f8-83be-75641ead26f3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.318251 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.318283 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.318292 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b306c3a-786a-44f8-83be-75641ead26f3-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.318303 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmskq\" (UniqueName: \"kubernetes.io/projected/8b306c3a-786a-44f8-83be-75641ead26f3-kube-api-access-bmskq\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.659005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" event={"ID":"8b306c3a-786a-44f8-83be-75641ead26f3","Type":"ContainerDied","Data":"9c8683534660846792aa8e05cb601cbfb3707c797ebc9dde5cd1bdb680143f2f"} Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.659050 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c8683534660846792aa8e05cb601cbfb3707c797ebc9dde5cd1bdb680143f2f" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.659150 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.741897 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79"] Dec 02 10:56:29 crc kubenswrapper[4813]: E1202 10:56:29.742351 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b306c3a-786a-44f8-83be-75641ead26f3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.742378 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b306c3a-786a-44f8-83be-75641ead26f3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.742620 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b306c3a-786a-44f8-83be-75641ead26f3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.743426 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.745608 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.747535 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.747724 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.747743 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.748147 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.750498 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79"] Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.828289 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrcl\" (UniqueName: \"kubernetes.io/projected/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-kube-api-access-pmrcl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.828412 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.828469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.828583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.930631 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.930701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.930782 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.930819 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmrcl\" (UniqueName: \"kubernetes.io/projected/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-kube-api-access-pmrcl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.936227 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.936307 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.946291 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:29 crc kubenswrapper[4813]: I1202 10:56:29.969041 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmrcl\" (UniqueName: \"kubernetes.io/projected/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-kube-api-access-pmrcl\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:30 crc kubenswrapper[4813]: I1202 10:56:30.556554 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:30 crc kubenswrapper[4813]: I1202 10:56:30.883988 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79"] Dec 02 10:56:30 crc kubenswrapper[4813]: I1202 10:56:30.891326 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 10:56:31 crc kubenswrapper[4813]: I1202 10:56:31.068841 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:56:31 crc kubenswrapper[4813]: E1202 10:56:31.069342 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:56:31 crc kubenswrapper[4813]: I1202 10:56:31.682553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" event={"ID":"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7","Type":"ContainerStarted","Data":"c30ef652bc1738b04edb55c25902c896a4fe14573391f985f4c38c9f67dffcc1"} Dec 02 10:56:31 crc kubenswrapper[4813]: I1202 10:56:31.682834 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" event={"ID":"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7","Type":"ContainerStarted","Data":"16795269bd930dad9ef387b5873b9a87326443ac390c3461bd34835a77a086d1"} Dec 02 10:56:31 crc kubenswrapper[4813]: I1202 10:56:31.710905 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" podStartSLOduration=2.239023356 podStartE2EDuration="2.710879495s" podCreationTimestamp="2025-12-02 10:56:29 +0000 UTC" firstStartedPulling="2025-12-02 10:56:30.890986703 +0000 UTC m=+2915.086161015" lastFinishedPulling="2025-12-02 10:56:31.362842852 +0000 UTC m=+2915.558017154" observedRunningTime="2025-12-02 10:56:31.698161734 +0000 UTC m=+2915.893336046" watchObservedRunningTime="2025-12-02 10:56:31.710879495 +0000 UTC m=+2915.906053827" Dec 02 10:56:35 crc kubenswrapper[4813]: I1202 10:56:35.719977 4813 generic.go:334] "Generic (PLEG): container finished" podID="2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7" containerID="c30ef652bc1738b04edb55c25902c896a4fe14573391f985f4c38c9f67dffcc1" exitCode=0 Dec 02 10:56:35 crc kubenswrapper[4813]: I1202 10:56:35.720159 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" event={"ID":"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7","Type":"ContainerDied","Data":"c30ef652bc1738b04edb55c25902c896a4fe14573391f985f4c38c9f67dffcc1"} Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.245743 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.395683 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-inventory\") pod \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.395748 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ceph\") pod \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.395911 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmrcl\" (UniqueName: \"kubernetes.io/projected/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-kube-api-access-pmrcl\") pod \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.395975 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ssh-key\") pod \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\" (UID: \"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7\") " Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.401186 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-kube-api-access-pmrcl" (OuterVolumeSpecName: "kube-api-access-pmrcl") pod "2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7" (UID: "2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7"). InnerVolumeSpecName "kube-api-access-pmrcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.401547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ceph" (OuterVolumeSpecName: "ceph") pod "2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7" (UID: "2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.422318 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7" (UID: "2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.437025 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-inventory" (OuterVolumeSpecName: "inventory") pod "2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7" (UID: "2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.498384 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmrcl\" (UniqueName: \"kubernetes.io/projected/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-kube-api-access-pmrcl\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.498420 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.498461 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.498473 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.746551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" event={"ID":"2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7","Type":"ContainerDied","Data":"16795269bd930dad9ef387b5873b9a87326443ac390c3461bd34835a77a086d1"} Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.746617 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16795269bd930dad9ef387b5873b9a87326443ac390c3461bd34835a77a086d1" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.746712 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.891688 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249"] Dec 02 10:56:37 crc kubenswrapper[4813]: E1202 10:56:37.892107 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.892125 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.892285 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.892843 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.901348 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.901352 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.901470 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.901615 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.901750 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:56:37 crc kubenswrapper[4813]: I1202 10:56:37.922118 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249"] Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.013065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.013411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.013454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qj4x\" (UniqueName: \"kubernetes.io/projected/6c99184e-d396-4734-985d-0f4312e5f82b-kube-api-access-5qj4x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.013543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.115174 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.115252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.115294 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qj4x\" (UniqueName: \"kubernetes.io/projected/6c99184e-d396-4734-985d-0f4312e5f82b-kube-api-access-5qj4x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.115388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.119607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.120869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.121263 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.131692 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qj4x\" (UniqueName: \"kubernetes.io/projected/6c99184e-d396-4734-985d-0f4312e5f82b-kube-api-access-5qj4x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rg249\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.224592 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:56:38 crc kubenswrapper[4813]: I1202 10:56:38.776123 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249"] Dec 02 10:56:38 crc kubenswrapper[4813]: W1202 10:56:38.784577 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c99184e_d396_4734_985d_0f4312e5f82b.slice/crio-6a4b02e10e1a346e94851bac7b8da3c3f7f2fdc68b6a4a6edc26f9bebd76addb WatchSource:0}: Error finding container 6a4b02e10e1a346e94851bac7b8da3c3f7f2fdc68b6a4a6edc26f9bebd76addb: Status 404 returned error can't find the container with id 6a4b02e10e1a346e94851bac7b8da3c3f7f2fdc68b6a4a6edc26f9bebd76addb Dec 02 10:56:39 crc kubenswrapper[4813]: I1202 10:56:39.764144 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" event={"ID":"6c99184e-d396-4734-985d-0f4312e5f82b","Type":"ContainerStarted","Data":"0735344ce05320aa48ead33b03c0b6218616a694da38ebba327e33a2223c6c71"} Dec 02 10:56:39 crc kubenswrapper[4813]: I1202 10:56:39.765006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" event={"ID":"6c99184e-d396-4734-985d-0f4312e5f82b","Type":"ContainerStarted","Data":"6a4b02e10e1a346e94851bac7b8da3c3f7f2fdc68b6a4a6edc26f9bebd76addb"} Dec 02 10:56:39 crc kubenswrapper[4813]: I1202 10:56:39.795375 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" podStartSLOduration=2.284422664 podStartE2EDuration="2.795350052s" podCreationTimestamp="2025-12-02 10:56:37 +0000 UTC" firstStartedPulling="2025-12-02 10:56:38.787783061 +0000 UTC m=+2922.982957383" lastFinishedPulling="2025-12-02 10:56:39.298710449 +0000 UTC m=+2923.493884771" observedRunningTime="2025-12-02 10:56:39.781468028 +0000 UTC m=+2923.976642370" watchObservedRunningTime="2025-12-02 10:56:39.795350052 +0000 UTC m=+2923.990524364" Dec 02 10:56:44 crc kubenswrapper[4813]: I1202 10:56:44.069189 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:56:44 crc kubenswrapper[4813]: E1202 10:56:44.070213 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.090909 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:56:56 crc kubenswrapper[4813]: E1202 10:56:56.091580 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 
10:56:56.707327 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b6cf8"] Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.709661 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.726620 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6cf8"] Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.860991 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-utilities\") pod \"redhat-operators-b6cf8\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.861041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-catalog-content\") pod \"redhat-operators-b6cf8\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.861141 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wzqs\" (UniqueName: \"kubernetes.io/projected/7f77bef6-b68b-4e42-a14c-b6dc01723d88-kube-api-access-6wzqs\") pod \"redhat-operators-b6cf8\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.963120 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wzqs\" (UniqueName: \"kubernetes.io/projected/7f77bef6-b68b-4e42-a14c-b6dc01723d88-kube-api-access-6wzqs\") pod \"redhat-operators-b6cf8\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.963346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-utilities\") pod \"redhat-operators-b6cf8\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.963380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-catalog-content\") pod \"redhat-operators-b6cf8\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.963901 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-utilities\") pod \"redhat-operators-b6cf8\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.963976 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-catalog-content\") pod \"redhat-operators-b6cf8\" (UID: 
\"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:56 crc kubenswrapper[4813]: I1202 10:56:56.991172 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wzqs\" (UniqueName: \"kubernetes.io/projected/7f77bef6-b68b-4e42-a14c-b6dc01723d88-kube-api-access-6wzqs\") pod \"redhat-operators-b6cf8\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:57 crc kubenswrapper[4813]: I1202 10:56:57.050157 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:56:57 crc kubenswrapper[4813]: I1202 10:56:57.539976 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6cf8"] Dec 02 10:56:57 crc kubenswrapper[4813]: W1202 10:56:57.542420 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f77bef6_b68b_4e42_a14c_b6dc01723d88.slice/crio-5a7982d36473bba0ffcba597a4b6962cb4d3800c6222bc1ea4daf40c5c68fad6 WatchSource:0}: Error finding container 5a7982d36473bba0ffcba597a4b6962cb4d3800c6222bc1ea4daf40c5c68fad6: Status 404 returned error can't find the container with id 5a7982d36473bba0ffcba597a4b6962cb4d3800c6222bc1ea4daf40c5c68fad6 Dec 02 10:56:57 crc kubenswrapper[4813]: I1202 10:56:57.915452 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerID="09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6" exitCode=0 Dec 02 10:56:57 crc kubenswrapper[4813]: I1202 10:56:57.915495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6cf8" event={"ID":"7f77bef6-b68b-4e42-a14c-b6dc01723d88","Type":"ContainerDied","Data":"09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6"} Dec 02 10:56:57 crc kubenswrapper[4813]: I1202 10:56:57.915994 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6cf8" event={"ID":"7f77bef6-b68b-4e42-a14c-b6dc01723d88","Type":"ContainerStarted","Data":"5a7982d36473bba0ffcba597a4b6962cb4d3800c6222bc1ea4daf40c5c68fad6"} Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.297062 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jmcs9"] Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.298687 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.308373 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jmcs9"] Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.487548 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-utilities\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.487902 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-catalog-content\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.487965 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6bp\" (UniqueName: \"kubernetes.io/projected/aae6e478-d365-475b-9ecb-58b52b16f788-kube-api-access-fx6bp\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.588933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-catalog-content\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.589003 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6bp\" (UniqueName: \"kubernetes.io/projected/aae6e478-d365-475b-9ecb-58b52b16f788-kube-api-access-fx6bp\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.589093 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-utilities\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.589480 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-catalog-content\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.589541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-utilities\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.612010 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fx6bp\" (UniqueName: \"kubernetes.io/projected/aae6e478-d365-475b-9ecb-58b52b16f788-kube-api-access-fx6bp\") pod \"certified-operators-jmcs9\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.615497 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:56:58 crc kubenswrapper[4813]: I1202 10:56:58.937371 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6cf8" event={"ID":"7f77bef6-b68b-4e42-a14c-b6dc01723d88","Type":"ContainerStarted","Data":"b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d"} Dec 02 10:56:59 crc kubenswrapper[4813]: W1202 10:56:59.262482 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae6e478_d365_475b_9ecb_58b52b16f788.slice/crio-f3cf11cb5a1aa1f057a0bceafd991207d5654a83bfdee30189413a62f3c93c61 WatchSource:0}: Error finding container f3cf11cb5a1aa1f057a0bceafd991207d5654a83bfdee30189413a62f3c93c61: Status 404 returned error can't find the container with id f3cf11cb5a1aa1f057a0bceafd991207d5654a83bfdee30189413a62f3c93c61 Dec 02 10:56:59 crc kubenswrapper[4813]: I1202 10:56:59.262678 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jmcs9"] Dec 02 10:56:59 crc kubenswrapper[4813]: I1202 10:56:59.951333 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerID="b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d" exitCode=0 Dec 02 10:56:59 crc kubenswrapper[4813]: I1202 10:56:59.951415 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6cf8" event={"ID":"7f77bef6-b68b-4e42-a14c-b6dc01723d88","Type":"ContainerDied","Data":"b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d"} Dec 02 10:56:59 crc kubenswrapper[4813]: I1202 10:56:59.953573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmcs9" event={"ID":"aae6e478-d365-475b-9ecb-58b52b16f788","Type":"ContainerStarted","Data":"6bade3c7109cf17be2319cef5fa661f089847ba0958fd71748349cab54fdf9ab"} Dec 02 10:56:59 crc kubenswrapper[4813]: I1202 10:56:59.953634 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmcs9" event={"ID":"aae6e478-d365-475b-9ecb-58b52b16f788","Type":"ContainerStarted","Data":"f3cf11cb5a1aa1f057a0bceafd991207d5654a83bfdee30189413a62f3c93c61"} Dec 02 10:57:00 crc kubenswrapper[4813]: I1202 10:57:00.534890 4813 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8b306c3a-786a-44f8-83be-75641ead26f3"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8b306c3a-786a-44f8-83be-75641ead26f3] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8b306c3a_786a_44f8_83be_75641ead26f3.slice" Dec 02 10:57:00 crc kubenswrapper[4813]: E1202 10:57:00.534960 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod8b306c3a-786a-44f8-83be-75641ead26f3] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8b306c3a-786a-44f8-83be-75641ead26f3] : Timed out while waiting for 
systemd to remove kubepods-besteffort-pod8b306c3a_786a_44f8_83be_75641ead26f3.slice" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" podUID="8b306c3a-786a-44f8-83be-75641ead26f3" Dec 02 10:57:00 crc kubenswrapper[4813]: I1202 10:57:00.961341 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g5ppq" Dec 02 10:57:01 crc kubenswrapper[4813]: I1202 10:57:01.973501 4813 generic.go:334] "Generic (PLEG): container finished" podID="aae6e478-d365-475b-9ecb-58b52b16f788" containerID="6bade3c7109cf17be2319cef5fa661f089847ba0958fd71748349cab54fdf9ab" exitCode=0 Dec 02 10:57:01 crc kubenswrapper[4813]: I1202 10:57:01.973598 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmcs9" event={"ID":"aae6e478-d365-475b-9ecb-58b52b16f788","Type":"ContainerDied","Data":"6bade3c7109cf17be2319cef5fa661f089847ba0958fd71748349cab54fdf9ab"} Dec 02 10:57:01 crc kubenswrapper[4813]: I1202 10:57:01.977132 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6cf8" event={"ID":"7f77bef6-b68b-4e42-a14c-b6dc01723d88","Type":"ContainerStarted","Data":"44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5"} Dec 02 10:57:02 crc kubenswrapper[4813]: I1202 10:57:02.022767 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b6cf8" podStartSLOduration=3.036071272 podStartE2EDuration="6.022747072s" podCreationTimestamp="2025-12-02 10:56:56 +0000 UTC" firstStartedPulling="2025-12-02 10:56:57.917103748 +0000 UTC m=+2942.112278050" lastFinishedPulling="2025-12-02 10:57:00.903779548 +0000 UTC m=+2945.098953850" observedRunningTime="2025-12-02 10:57:02.015405334 +0000 UTC m=+2946.210579636" watchObservedRunningTime="2025-12-02 10:57:02.022747072 +0000 UTC m=+2946.217921374" Dec 02 10:57:04 crc kubenswrapper[4813]: I1202 10:57:04.035027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmcs9" event={"ID":"aae6e478-d365-475b-9ecb-58b52b16f788","Type":"ContainerStarted","Data":"106c7e5bd2aa4f0db1a60b6382010dac3b67a9428faabf8fbb9a3668f61abbea"} Dec 02 10:57:05 crc kubenswrapper[4813]: I1202 10:57:05.047289 4813 generic.go:334] "Generic (PLEG): container finished" podID="aae6e478-d365-475b-9ecb-58b52b16f788" containerID="106c7e5bd2aa4f0db1a60b6382010dac3b67a9428faabf8fbb9a3668f61abbea" exitCode=0 Dec 02 10:57:05 crc kubenswrapper[4813]: I1202 10:57:05.047546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmcs9" event={"ID":"aae6e478-d365-475b-9ecb-58b52b16f788","Type":"ContainerDied","Data":"106c7e5bd2aa4f0db1a60b6382010dac3b67a9428faabf8fbb9a3668f61abbea"} Dec 02 10:57:06 crc kubenswrapper[4813]: I1202 10:57:06.061841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmcs9" event={"ID":"aae6e478-d365-475b-9ecb-58b52b16f788","Type":"ContainerStarted","Data":"db274015a70a4c7074ab48aae22507b065fea811f95c585a8e543687679b66ba"} Dec 02 10:57:06 crc kubenswrapper[4813]: I1202 10:57:06.093461 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jmcs9" podStartSLOduration=4.368703226 podStartE2EDuration="8.093443484s" podCreationTimestamp="2025-12-02 10:56:58 +0000 UTC" firstStartedPulling="2025-12-02 10:57:01.978752243 +0000 UTC 
m=+2946.173926545" lastFinishedPulling="2025-12-02 10:57:05.703492481 +0000 UTC m=+2949.898666803" observedRunningTime="2025-12-02 10:57:06.081061592 +0000 UTC m=+2950.276235914" watchObservedRunningTime="2025-12-02 10:57:06.093443484 +0000 UTC m=+2950.288617806" Dec 02 10:57:07 crc kubenswrapper[4813]: I1202 10:57:07.051259 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:57:07 crc kubenswrapper[4813]: I1202 10:57:07.051308 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:57:08 crc kubenswrapper[4813]: I1202 10:57:08.099087 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b6cf8" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="registry-server" probeResult="failure" output=< Dec 02 10:57:08 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 10:57:08 crc kubenswrapper[4813]: > Dec 02 10:57:08 crc kubenswrapper[4813]: I1202 10:57:08.616130 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:57:08 crc kubenswrapper[4813]: I1202 10:57:08.616227 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:57:09 crc kubenswrapper[4813]: I1202 10:57:09.662436 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jmcs9" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="registry-server" probeResult="failure" output=< Dec 02 10:57:09 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 10:57:09 crc kubenswrapper[4813]: > Dec 02 10:57:10 crc kubenswrapper[4813]: I1202 10:57:10.069171 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:57:10 crc kubenswrapper[4813]: E1202 10:57:10.069553 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:57:17 crc kubenswrapper[4813]: I1202 10:57:17.100670 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:57:17 crc kubenswrapper[4813]: I1202 10:57:17.155778 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:57:17 crc kubenswrapper[4813]: I1202 10:57:17.340588 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6cf8"] Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.179763 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b6cf8" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="registry-server" containerID="cri-o://44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5" gracePeriod=2 Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.612537 4813 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.679886 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.705557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wzqs\" (UniqueName: \"kubernetes.io/projected/7f77bef6-b68b-4e42-a14c-b6dc01723d88-kube-api-access-6wzqs\") pod \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.705766 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-catalog-content\") pod \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.705802 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-utilities\") pod \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\" (UID: \"7f77bef6-b68b-4e42-a14c-b6dc01723d88\") " Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.706855 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-utilities" (OuterVolumeSpecName: "utilities") pod "7f77bef6-b68b-4e42-a14c-b6dc01723d88" (UID: "7f77bef6-b68b-4e42-a14c-b6dc01723d88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.711592 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f77bef6-b68b-4e42-a14c-b6dc01723d88-kube-api-access-6wzqs" (OuterVolumeSpecName: "kube-api-access-6wzqs") pod "7f77bef6-b68b-4e42-a14c-b6dc01723d88" (UID: "7f77bef6-b68b-4e42-a14c-b6dc01723d88"). InnerVolumeSpecName "kube-api-access-6wzqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.726335 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.808416 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wzqs\" (UniqueName: \"kubernetes.io/projected/7f77bef6-b68b-4e42-a14c-b6dc01723d88-kube-api-access-6wzqs\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.808445 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.820458 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f77bef6-b68b-4e42-a14c-b6dc01723d88" (UID: "7f77bef6-b68b-4e42-a14c-b6dc01723d88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:57:18 crc kubenswrapper[4813]: I1202 10:57:18.910914 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f77bef6-b68b-4e42-a14c-b6dc01723d88-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.191663 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerID="44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5" exitCode=0 Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.191732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6cf8" event={"ID":"7f77bef6-b68b-4e42-a14c-b6dc01723d88","Type":"ContainerDied","Data":"44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5"} Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.191756 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6cf8" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.191801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6cf8" event={"ID":"7f77bef6-b68b-4e42-a14c-b6dc01723d88","Type":"ContainerDied","Data":"5a7982d36473bba0ffcba597a4b6962cb4d3800c6222bc1ea4daf40c5c68fad6"} Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.191825 4813 scope.go:117] "RemoveContainer" containerID="44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.241114 4813 scope.go:117] "RemoveContainer" containerID="b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.252842 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6cf8"] Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.261796 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b6cf8"] Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.264006 4813 scope.go:117] "RemoveContainer" containerID="09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.304046 4813 scope.go:117] "RemoveContainer" containerID="44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5" Dec 02 10:57:19 crc kubenswrapper[4813]: E1202 10:57:19.304477 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5\": container with ID starting with 44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5 not found: ID does not exist" containerID="44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.304518 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5"} err="failed to get container status \"44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5\": rpc error: code = NotFound desc = could not find container \"44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5\": container with ID starting with 44886f92999df21912a1fcdf40285889f1ec1f1f170ab2240b42de39f77e54e5 not found: ID does not exist" Dec 02 10:57:19 crc 
kubenswrapper[4813]: I1202 10:57:19.304546 4813 scope.go:117] "RemoveContainer" containerID="b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d" Dec 02 10:57:19 crc kubenswrapper[4813]: E1202 10:57:19.304908 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d\": container with ID starting with b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d not found: ID does not exist" containerID="b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.304959 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d"} err="failed to get container status \"b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d\": rpc error: code = NotFound desc = could not find container \"b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d\": container with ID starting with b095aa97d372ce00534526c5483b92c3d087c083c676d2ff21f7e63187f47a0d not found: ID does not exist" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.304994 4813 scope.go:117] "RemoveContainer" containerID="09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6" Dec 02 10:57:19 crc kubenswrapper[4813]: E1202 10:57:19.305311 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6\": container with ID starting with 09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6 not found: ID does not exist" containerID="09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6" Dec 02 10:57:19 crc kubenswrapper[4813]: I1202 10:57:19.305348 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6"} err="failed to get container status \"09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6\": rpc error: code = NotFound desc = could not find container \"09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6\": container with ID starting with 09e75039ab1bde7a6c0f4b809a90efc225d9b8a5d68c815febe376e5ff6511e6 not found: ID does not exist" Dec 02 10:57:20 crc kubenswrapper[4813]: I1202 10:57:20.078634 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" path="/var/lib/kubelet/pods/7f77bef6-b68b-4e42-a14c-b6dc01723d88/volumes" Dec 02 10:57:20 crc kubenswrapper[4813]: I1202 10:57:20.948717 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jmcs9"] Dec 02 10:57:20 crc kubenswrapper[4813]: I1202 10:57:20.950242 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jmcs9" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="registry-server" containerID="cri-o://db274015a70a4c7074ab48aae22507b065fea811f95c585a8e543687679b66ba" gracePeriod=2 Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.223015 4813 generic.go:334] "Generic (PLEG): container finished" podID="aae6e478-d365-475b-9ecb-58b52b16f788" containerID="db274015a70a4c7074ab48aae22507b065fea811f95c585a8e543687679b66ba" exitCode=0 Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 
10:57:21.223358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmcs9" event={"ID":"aae6e478-d365-475b-9ecb-58b52b16f788","Type":"ContainerDied","Data":"db274015a70a4c7074ab48aae22507b065fea811f95c585a8e543687679b66ba"} Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.376137 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.462372 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx6bp\" (UniqueName: \"kubernetes.io/projected/aae6e478-d365-475b-9ecb-58b52b16f788-kube-api-access-fx6bp\") pod \"aae6e478-d365-475b-9ecb-58b52b16f788\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.462714 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-utilities\") pod \"aae6e478-d365-475b-9ecb-58b52b16f788\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.462803 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-catalog-content\") pod \"aae6e478-d365-475b-9ecb-58b52b16f788\" (UID: \"aae6e478-d365-475b-9ecb-58b52b16f788\") " Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.464194 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-utilities" (OuterVolumeSpecName: "utilities") pod "aae6e478-d365-475b-9ecb-58b52b16f788" (UID: "aae6e478-d365-475b-9ecb-58b52b16f788"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.469193 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae6e478-d365-475b-9ecb-58b52b16f788-kube-api-access-fx6bp" (OuterVolumeSpecName: "kube-api-access-fx6bp") pod "aae6e478-d365-475b-9ecb-58b52b16f788" (UID: "aae6e478-d365-475b-9ecb-58b52b16f788"). InnerVolumeSpecName "kube-api-access-fx6bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.534091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aae6e478-d365-475b-9ecb-58b52b16f788" (UID: "aae6e478-d365-475b-9ecb-58b52b16f788"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.564794 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.564819 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx6bp\" (UniqueName: \"kubernetes.io/projected/aae6e478-d365-475b-9ecb-58b52b16f788-kube-api-access-fx6bp\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:21 crc kubenswrapper[4813]: I1202 10:57:21.564829 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae6e478-d365-475b-9ecb-58b52b16f788-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:22 crc kubenswrapper[4813]: I1202 10:57:22.069048 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:57:22 crc kubenswrapper[4813]: E1202 10:57:22.069641 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:57:22 crc kubenswrapper[4813]: I1202 10:57:22.237556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jmcs9" event={"ID":"aae6e478-d365-475b-9ecb-58b52b16f788","Type":"ContainerDied","Data":"f3cf11cb5a1aa1f057a0bceafd991207d5654a83bfdee30189413a62f3c93c61"} Dec 02 10:57:22 crc kubenswrapper[4813]: I1202 10:57:22.237649 4813 scope.go:117] "RemoveContainer" containerID="db274015a70a4c7074ab48aae22507b065fea811f95c585a8e543687679b66ba" Dec 02 10:57:22 crc kubenswrapper[4813]: I1202 10:57:22.237631 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jmcs9" Dec 02 10:57:22 crc kubenswrapper[4813]: I1202 10:57:22.265369 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jmcs9"] Dec 02 10:57:22 crc kubenswrapper[4813]: I1202 10:57:22.266857 4813 scope.go:117] "RemoveContainer" containerID="106c7e5bd2aa4f0db1a60b6382010dac3b67a9428faabf8fbb9a3668f61abbea" Dec 02 10:57:22 crc kubenswrapper[4813]: I1202 10:57:22.274323 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jmcs9"] Dec 02 10:57:22 crc kubenswrapper[4813]: I1202 10:57:22.288649 4813 scope.go:117] "RemoveContainer" containerID="6bade3c7109cf17be2319cef5fa661f089847ba0958fd71748349cab54fdf9ab" Dec 02 10:57:24 crc kubenswrapper[4813]: I1202 10:57:24.085112 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" path="/var/lib/kubelet/pods/aae6e478-d365-475b-9ecb-58b52b16f788/volumes" Dec 02 10:57:25 crc kubenswrapper[4813]: I1202 10:57:25.269497 4813 generic.go:334] "Generic (PLEG): container finished" podID="6c99184e-d396-4734-985d-0f4312e5f82b" containerID="0735344ce05320aa48ead33b03c0b6218616a694da38ebba327e33a2223c6c71" exitCode=0 Dec 02 10:57:25 crc kubenswrapper[4813]: I1202 10:57:25.269850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" event={"ID":"6c99184e-d396-4734-985d-0f4312e5f82b","Type":"ContainerDied","Data":"0735344ce05320aa48ead33b03c0b6218616a694da38ebba327e33a2223c6c71"} Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.723038 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.874202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qj4x\" (UniqueName: \"kubernetes.io/projected/6c99184e-d396-4734-985d-0f4312e5f82b-kube-api-access-5qj4x\") pod \"6c99184e-d396-4734-985d-0f4312e5f82b\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.874390 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ceph\") pod \"6c99184e-d396-4734-985d-0f4312e5f82b\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.874423 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ssh-key\") pod \"6c99184e-d396-4734-985d-0f4312e5f82b\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.874502 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-inventory\") pod \"6c99184e-d396-4734-985d-0f4312e5f82b\" (UID: \"6c99184e-d396-4734-985d-0f4312e5f82b\") " Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.881268 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ceph" (OuterVolumeSpecName: "ceph") pod "6c99184e-d396-4734-985d-0f4312e5f82b" (UID: "6c99184e-d396-4734-985d-0f4312e5f82b"). 
InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.881303 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c99184e-d396-4734-985d-0f4312e5f82b-kube-api-access-5qj4x" (OuterVolumeSpecName: "kube-api-access-5qj4x") pod "6c99184e-d396-4734-985d-0f4312e5f82b" (UID: "6c99184e-d396-4734-985d-0f4312e5f82b"). InnerVolumeSpecName "kube-api-access-5qj4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.910921 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c99184e-d396-4734-985d-0f4312e5f82b" (UID: "6c99184e-d396-4734-985d-0f4312e5f82b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.912881 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-inventory" (OuterVolumeSpecName: "inventory") pod "6c99184e-d396-4734-985d-0f4312e5f82b" (UID: "6c99184e-d396-4734-985d-0f4312e5f82b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.977871 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.977909 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.977922 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c99184e-d396-4734-985d-0f4312e5f82b-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:26 crc kubenswrapper[4813]: I1202 10:57:26.977939 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qj4x\" (UniqueName: \"kubernetes.io/projected/6c99184e-d396-4734-985d-0f4312e5f82b-kube-api-access-5qj4x\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.286845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" event={"ID":"6c99184e-d396-4734-985d-0f4312e5f82b","Type":"ContainerDied","Data":"6a4b02e10e1a346e94851bac7b8da3c3f7f2fdc68b6a4a6edc26f9bebd76addb"} Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.287185 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a4b02e10e1a346e94851bac7b8da3c3f7f2fdc68b6a4a6edc26f9bebd76addb" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.286902 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rg249" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.410979 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ppbvl"] Dec 02 10:57:27 crc kubenswrapper[4813]: E1202 10:57:27.411383 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="extract-content" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411400 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="extract-content" Dec 02 10:57:27 crc kubenswrapper[4813]: E1202 10:57:27.411412 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c99184e-d396-4734-985d-0f4312e5f82b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411424 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c99184e-d396-4734-985d-0f4312e5f82b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:57:27 crc kubenswrapper[4813]: E1202 10:57:27.411446 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="registry-server" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411455 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="registry-server" Dec 02 10:57:27 crc kubenswrapper[4813]: E1202 10:57:27.411478 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="extract-utilities" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411487 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="extract-utilities" Dec 02 10:57:27 crc kubenswrapper[4813]: E1202 10:57:27.411510 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="registry-server" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411522 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="registry-server" Dec 02 10:57:27 crc kubenswrapper[4813]: E1202 10:57:27.411539 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="extract-content" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411551 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="extract-content" Dec 02 10:57:27 crc kubenswrapper[4813]: E1202 10:57:27.411587 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="extract-utilities" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411601 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="extract-utilities" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411817 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae6e478-d365-475b-9ecb-58b52b16f788" containerName="registry-server" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.411838 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f77bef6-b68b-4e42-a14c-b6dc01723d88" containerName="registry-server" Dec 02 10:57:27 crc 
kubenswrapper[4813]: I1202 10:57:27.411865 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c99184e-d396-4734-985d-0f4312e5f82b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.412703 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.415304 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.415306 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.415400 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.415498 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.415754 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.424239 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ppbvl"] Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.587413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ceph\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.587480 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqld\" (UniqueName: \"kubernetes.io/projected/9eb11fe4-0504-4a53-a627-a1314b1115c5-kube-api-access-zkqld\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.587583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.587645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.688807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.688938 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ceph\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.688967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqld\" (UniqueName: \"kubernetes.io/projected/9eb11fe4-0504-4a53-a627-a1314b1115c5-kube-api-access-zkqld\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.689014 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.692139 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.693260 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ceph\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.694149 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.707605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqld\" (UniqueName: \"kubernetes.io/projected/9eb11fe4-0504-4a53-a627-a1314b1115c5-kube-api-access-zkqld\") pod \"ssh-known-hosts-edpm-deployment-ppbvl\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:27 crc kubenswrapper[4813]: I1202 10:57:27.734478 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:28 crc kubenswrapper[4813]: I1202 10:57:28.253219 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ppbvl"] Dec 02 10:57:28 crc kubenswrapper[4813]: I1202 10:57:28.295830 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" event={"ID":"9eb11fe4-0504-4a53-a627-a1314b1115c5","Type":"ContainerStarted","Data":"5b08395e4274ca8813b9f453ece21d6ba8ca5c1d28f6cd03b7061e869e027b33"} Dec 02 10:57:29 crc kubenswrapper[4813]: I1202 10:57:29.307497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" event={"ID":"9eb11fe4-0504-4a53-a627-a1314b1115c5","Type":"ContainerStarted","Data":"a86572732fd3089a0bbe82130441e55d28a2a5ccbf5b37e7facc6f7af407e165"} Dec 02 10:57:29 crc kubenswrapper[4813]: I1202 10:57:29.328003 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" podStartSLOduration=1.8375178349999999 podStartE2EDuration="2.327982953s" podCreationTimestamp="2025-12-02 10:57:27 +0000 UTC" firstStartedPulling="2025-12-02 10:57:28.257524195 +0000 UTC m=+2972.452698497" lastFinishedPulling="2025-12-02 10:57:28.747989313 +0000 UTC m=+2972.943163615" observedRunningTime="2025-12-02 10:57:29.321614902 +0000 UTC m=+2973.516789204" watchObservedRunningTime="2025-12-02 10:57:29.327982953 +0000 UTC m=+2973.523157275" Dec 02 10:57:36 crc kubenswrapper[4813]: I1202 10:57:36.078857 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:57:36 crc kubenswrapper[4813]: E1202 10:57:36.080271 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:57:39 crc kubenswrapper[4813]: I1202 10:57:39.399791 4813 generic.go:334] "Generic (PLEG): container finished" podID="9eb11fe4-0504-4a53-a627-a1314b1115c5" containerID="a86572732fd3089a0bbe82130441e55d28a2a5ccbf5b37e7facc6f7af407e165" exitCode=0 Dec 02 10:57:39 crc kubenswrapper[4813]: I1202 10:57:39.399865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" event={"ID":"9eb11fe4-0504-4a53-a627-a1314b1115c5","Type":"ContainerDied","Data":"a86572732fd3089a0bbe82130441e55d28a2a5ccbf5b37e7facc6f7af407e165"} Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.774941 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.866169 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ceph\") pod \"9eb11fe4-0504-4a53-a627-a1314b1115c5\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.866309 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkqld\" (UniqueName: \"kubernetes.io/projected/9eb11fe4-0504-4a53-a627-a1314b1115c5-kube-api-access-zkqld\") pod \"9eb11fe4-0504-4a53-a627-a1314b1115c5\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.866421 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-inventory-0\") pod \"9eb11fe4-0504-4a53-a627-a1314b1115c5\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.866457 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ssh-key-openstack-edpm-ipam\") pod \"9eb11fe4-0504-4a53-a627-a1314b1115c5\" (UID: \"9eb11fe4-0504-4a53-a627-a1314b1115c5\") " Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.880433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ceph" (OuterVolumeSpecName: "ceph") pod "9eb11fe4-0504-4a53-a627-a1314b1115c5" (UID: "9eb11fe4-0504-4a53-a627-a1314b1115c5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.880660 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb11fe4-0504-4a53-a627-a1314b1115c5-kube-api-access-zkqld" (OuterVolumeSpecName: "kube-api-access-zkqld") pod "9eb11fe4-0504-4a53-a627-a1314b1115c5" (UID: "9eb11fe4-0504-4a53-a627-a1314b1115c5"). InnerVolumeSpecName "kube-api-access-zkqld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.899670 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9eb11fe4-0504-4a53-a627-a1314b1115c5" (UID: "9eb11fe4-0504-4a53-a627-a1314b1115c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.906955 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9eb11fe4-0504-4a53-a627-a1314b1115c5" (UID: "9eb11fe4-0504-4a53-a627-a1314b1115c5"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.968990 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkqld\" (UniqueName: \"kubernetes.io/projected/9eb11fe4-0504-4a53-a627-a1314b1115c5-kube-api-access-zkqld\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.969031 4813 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.969046 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:40 crc kubenswrapper[4813]: I1202 10:57:40.969059 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb11fe4-0504-4a53-a627-a1314b1115c5-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.421544 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" event={"ID":"9eb11fe4-0504-4a53-a627-a1314b1115c5","Type":"ContainerDied","Data":"5b08395e4274ca8813b9f453ece21d6ba8ca5c1d28f6cd03b7061e869e027b33"} Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.421626 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b08395e4274ca8813b9f453ece21d6ba8ca5c1d28f6cd03b7061e869e027b33" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.421748 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ppbvl" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.533311 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q"] Dec 02 10:57:41 crc kubenswrapper[4813]: E1202 10:57:41.533769 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb11fe4-0504-4a53-a627-a1314b1115c5" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.533790 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb11fe4-0504-4a53-a627-a1314b1115c5" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.533994 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb11fe4-0504-4a53-a627-a1314b1115c5" containerName="ssh-known-hosts-edpm-deployment" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.535663 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.538228 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.538531 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.538699 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.538828 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.538881 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.547544 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q"] Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.580871 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.580934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.580958 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.581360 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cww5w\" (UniqueName: \"kubernetes.io/projected/545847c6-6495-4189-84ae-d6d6e6f03097-kube-api-access-cww5w\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.683447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.683556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ssh-key\") 
pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.683609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.683794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cww5w\" (UniqueName: \"kubernetes.io/projected/545847c6-6495-4189-84ae-d6d6e6f03097-kube-api-access-cww5w\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.687730 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.688332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.690389 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.704454 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cww5w\" (UniqueName: \"kubernetes.io/projected/545847c6-6495-4189-84ae-d6d6e6f03097-kube-api-access-cww5w\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9577q\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:41 crc kubenswrapper[4813]: I1202 10:57:41.854745 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:42 crc kubenswrapper[4813]: I1202 10:57:42.464327 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q"] Dec 02 10:57:43 crc kubenswrapper[4813]: I1202 10:57:43.458149 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" event={"ID":"545847c6-6495-4189-84ae-d6d6e6f03097","Type":"ContainerStarted","Data":"a2556660098ba784251d68851850d094b20024d02a050ebf423c3255dd0f20b6"} Dec 02 10:57:43 crc kubenswrapper[4813]: I1202 10:57:43.459169 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" event={"ID":"545847c6-6495-4189-84ae-d6d6e6f03097","Type":"ContainerStarted","Data":"e5a78a597534b62d756f331337091f0c9b9ea5799d2db880f098ee3b26d8e7bd"} Dec 02 10:57:43 crc kubenswrapper[4813]: I1202 10:57:43.492892 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" podStartSLOduration=1.976788379 podStartE2EDuration="2.492871233s" podCreationTimestamp="2025-12-02 10:57:41 +0000 UTC" firstStartedPulling="2025-12-02 10:57:42.470216045 +0000 UTC m=+2986.665390357" lastFinishedPulling="2025-12-02 10:57:42.986298909 +0000 UTC m=+2987.181473211" observedRunningTime="2025-12-02 10:57:43.483415634 +0000 UTC m=+2987.678589946" watchObservedRunningTime="2025-12-02 10:57:43.492871233 +0000 UTC m=+2987.688045545" Dec 02 10:57:48 crc kubenswrapper[4813]: I1202 10:57:48.068624 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:57:48 crc kubenswrapper[4813]: E1202 10:57:48.069241 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:57:51 crc kubenswrapper[4813]: I1202 10:57:51.533162 4813 generic.go:334] "Generic (PLEG): container finished" podID="545847c6-6495-4189-84ae-d6d6e6f03097" containerID="a2556660098ba784251d68851850d094b20024d02a050ebf423c3255dd0f20b6" exitCode=0 Dec 02 10:57:51 crc kubenswrapper[4813]: I1202 10:57:51.533291 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" event={"ID":"545847c6-6495-4189-84ae-d6d6e6f03097","Type":"ContainerDied","Data":"a2556660098ba784251d68851850d094b20024d02a050ebf423c3255dd0f20b6"} Dec 02 10:57:52 crc kubenswrapper[4813]: I1202 10:57:52.976396 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.101704 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ssh-key\") pod \"545847c6-6495-4189-84ae-d6d6e6f03097\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.101825 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ceph\") pod \"545847c6-6495-4189-84ae-d6d6e6f03097\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.101871 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-inventory\") pod \"545847c6-6495-4189-84ae-d6d6e6f03097\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.101912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cww5w\" (UniqueName: \"kubernetes.io/projected/545847c6-6495-4189-84ae-d6d6e6f03097-kube-api-access-cww5w\") pod \"545847c6-6495-4189-84ae-d6d6e6f03097\" (UID: \"545847c6-6495-4189-84ae-d6d6e6f03097\") " Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.109159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ceph" (OuterVolumeSpecName: "ceph") pod "545847c6-6495-4189-84ae-d6d6e6f03097" (UID: "545847c6-6495-4189-84ae-d6d6e6f03097"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.109646 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545847c6-6495-4189-84ae-d6d6e6f03097-kube-api-access-cww5w" (OuterVolumeSpecName: "kube-api-access-cww5w") pod "545847c6-6495-4189-84ae-d6d6e6f03097" (UID: "545847c6-6495-4189-84ae-d6d6e6f03097"). InnerVolumeSpecName "kube-api-access-cww5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.136958 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-inventory" (OuterVolumeSpecName: "inventory") pod "545847c6-6495-4189-84ae-d6d6e6f03097" (UID: "545847c6-6495-4189-84ae-d6d6e6f03097"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.149639 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "545847c6-6495-4189-84ae-d6d6e6f03097" (UID: "545847c6-6495-4189-84ae-d6d6e6f03097"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.204333 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.204378 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.204406 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/545847c6-6495-4189-84ae-d6d6e6f03097-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.204420 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cww5w\" (UniqueName: \"kubernetes.io/projected/545847c6-6495-4189-84ae-d6d6e6f03097-kube-api-access-cww5w\") on node \"crc\" DevicePath \"\"" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.552401 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" event={"ID":"545847c6-6495-4189-84ae-d6d6e6f03097","Type":"ContainerDied","Data":"e5a78a597534b62d756f331337091f0c9b9ea5799d2db880f098ee3b26d8e7bd"} Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.552438 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5a78a597534b62d756f331337091f0c9b9ea5799d2db880f098ee3b26d8e7bd" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.552469 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9577q" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.644904 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz"] Dec 02 10:57:53 crc kubenswrapper[4813]: E1202 10:57:53.645411 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545847c6-6495-4189-84ae-d6d6e6f03097" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.645443 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="545847c6-6495-4189-84ae-d6d6e6f03097" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.645760 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="545847c6-6495-4189-84ae-d6d6e6f03097" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.646655 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.649515 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.650181 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.656327 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.656359 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.656476 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.663878 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz"] Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.814696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.814764 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kts9\" (UniqueName: \"kubernetes.io/projected/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-kube-api-access-8kts9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.814814 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.814836 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.916244 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.916294 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.916417 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.916470 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kts9\" (UniqueName: \"kubernetes.io/projected/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-kube-api-access-8kts9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.920422 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.921793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.923302 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.934666 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kts9\" (UniqueName: \"kubernetes.io/projected/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-kube-api-access-8kts9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:53 crc kubenswrapper[4813]: I1202 10:57:53.967582 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:57:54 crc kubenswrapper[4813]: I1202 10:57:54.482443 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz"] Dec 02 10:57:54 crc kubenswrapper[4813]: I1202 10:57:54.561468 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" event={"ID":"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3","Type":"ContainerStarted","Data":"e60fe26c9f7a1c39330cb5c02d287fc561a1cc2b0301c14b69a598bf4666c9d3"} Dec 02 10:57:55 crc kubenswrapper[4813]: I1202 10:57:55.571379 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" event={"ID":"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3","Type":"ContainerStarted","Data":"bbef49bbd69ea254de6678bfbcbf3d112b443b7df71cbe26debf37bde4fe687e"} Dec 02 10:57:55 crc kubenswrapper[4813]: I1202 10:57:55.592621 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" podStartSLOduration=2.05264602 podStartE2EDuration="2.592604683s" podCreationTimestamp="2025-12-02 10:57:53 +0000 UTC" firstStartedPulling="2025-12-02 10:57:54.480197584 +0000 UTC m=+2998.675371886" lastFinishedPulling="2025-12-02 10:57:55.020156247 +0000 UTC m=+2999.215330549" observedRunningTime="2025-12-02 10:57:55.587477097 +0000 UTC m=+2999.782651409" watchObservedRunningTime="2025-12-02 10:57:55.592604683 +0000 UTC m=+2999.787778985" Dec 02 10:58:01 crc kubenswrapper[4813]: I1202 10:58:01.068804 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:58:01 crc kubenswrapper[4813]: E1202 10:58:01.070395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:58:05 crc kubenswrapper[4813]: I1202 10:58:05.653427 4813 generic.go:334] "Generic (PLEG): container finished" podID="bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" containerID="bbef49bbd69ea254de6678bfbcbf3d112b443b7df71cbe26debf37bde4fe687e" exitCode=0 Dec 02 10:58:05 crc kubenswrapper[4813]: I1202 10:58:05.653548 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" event={"ID":"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3","Type":"ContainerDied","Data":"bbef49bbd69ea254de6678bfbcbf3d112b443b7df71cbe26debf37bde4fe687e"} Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.093410 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.261292 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ceph\") pod \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.261396 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-inventory\") pod \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.261491 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key\") pod \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.261521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kts9\" (UniqueName: \"kubernetes.io/projected/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-kube-api-access-8kts9\") pod \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.266783 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-kube-api-access-8kts9" (OuterVolumeSpecName: "kube-api-access-8kts9") pod "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" (UID: "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3"). InnerVolumeSpecName "kube-api-access-8kts9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.267452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ceph" (OuterVolumeSpecName: "ceph") pod "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" (UID: "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:07 crc kubenswrapper[4813]: E1202 10:58:07.289870 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key podName:bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3 nodeName:}" failed. No retries permitted until 2025-12-02 10:58:07.789831136 +0000 UTC m=+3011.985005438 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key") pod "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" (UID: "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3") : error deleting /var/lib/kubelet/pods/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3/volume-subpaths: remove /var/lib/kubelet/pods/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3/volume-subpaths: no such file or directory Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.292374 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-inventory" (OuterVolumeSpecName: "inventory") pod "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" (UID: "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.363710 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kts9\" (UniqueName: \"kubernetes.io/projected/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-kube-api-access-8kts9\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.363746 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.363756 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.674607 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" event={"ID":"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3","Type":"ContainerDied","Data":"e60fe26c9f7a1c39330cb5c02d287fc561a1cc2b0301c14b69a598bf4666c9d3"} Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.674667 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e60fe26c9f7a1c39330cb5c02d287fc561a1cc2b0301c14b69a598bf4666c9d3" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.674693 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.849946 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn"] Dec 02 10:58:07 crc kubenswrapper[4813]: E1202 10:58:07.850436 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.850462 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.850660 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.851333 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.854472 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.856259 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.856509 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.873947 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn"] Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.875332 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key\") pod \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\" (UID: \"bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3\") " Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.879208 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3" (UID: "bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.981335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.981429 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.981470 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.981631 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 
crc kubenswrapper[4813]: I1202 10:58:07.981696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.981775 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.981818 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.981902 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.981992 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.982041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwzf\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-kube-api-access-2mwzf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.982087 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.982138 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.982182 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:07 crc kubenswrapper[4813]: I1202 10:58:07.982303 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084266 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084372 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mwzf\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-kube-api-access-2mwzf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084401 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084431 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084468 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084513 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084596 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.084624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.088607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.088619 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.089051 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.089845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.090366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.090550 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.090968 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.091119 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.091485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.091792 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.092035 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.092514 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.099973 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mwzf\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-kube-api-access-2mwzf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htwnn\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.175341 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:08 crc kubenswrapper[4813]: I1202 10:58:08.808796 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn"] Dec 02 10:58:08 crc kubenswrapper[4813]: W1202 10:58:08.815474 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod485c1630_9c8a_4474_b75c_4bfed04bcea9.slice/crio-e180b15fc09c43d59aa888d722f0790a9d4bae061d232e6ff359c52d25eac8b0 WatchSource:0}: Error finding container e180b15fc09c43d59aa888d722f0790a9d4bae061d232e6ff359c52d25eac8b0: Status 404 returned error can't find the container with id e180b15fc09c43d59aa888d722f0790a9d4bae061d232e6ff359c52d25eac8b0 Dec 02 10:58:09 crc kubenswrapper[4813]: I1202 10:58:09.713993 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" event={"ID":"485c1630-9c8a-4474-b75c-4bfed04bcea9","Type":"ContainerStarted","Data":"5fb0e5eada5b1d562d7c908cae336c72bd63c7347f33cd77241262f3807c516b"} Dec 02 10:58:09 crc kubenswrapper[4813]: I1202 10:58:09.714604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" event={"ID":"485c1630-9c8a-4474-b75c-4bfed04bcea9","Type":"ContainerStarted","Data":"e180b15fc09c43d59aa888d722f0790a9d4bae061d232e6ff359c52d25eac8b0"} Dec 02 10:58:09 crc kubenswrapper[4813]: I1202 10:58:09.738368 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" podStartSLOduration=2.204403276 podStartE2EDuration="2.738348668s" podCreationTimestamp="2025-12-02 10:58:07 +0000 UTC" firstStartedPulling="2025-12-02 10:58:08.818211573 +0000 UTC m=+3013.013385875" lastFinishedPulling="2025-12-02 10:58:09.352156955 +0000 UTC m=+3013.547331267" observedRunningTime="2025-12-02 10:58:09.73385997 +0000 UTC m=+3013.929034272" watchObservedRunningTime="2025-12-02 10:58:09.738348668 +0000 UTC m=+3013.933522970" Dec 02 10:58:12 crc kubenswrapper[4813]: I1202 10:58:12.068762 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:58:12 crc kubenswrapper[4813]: E1202 10:58:12.069494 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:58:24 crc kubenswrapper[4813]: I1202 10:58:24.068698 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:58:24 crc kubenswrapper[4813]: E1202 10:58:24.069723 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:58:38 crc kubenswrapper[4813]: 
I1202 10:58:38.069017 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:58:38 crc kubenswrapper[4813]: E1202 10:58:38.069942 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:58:44 crc kubenswrapper[4813]: I1202 10:58:44.082289 4813 generic.go:334] "Generic (PLEG): container finished" podID="485c1630-9c8a-4474-b75c-4bfed04bcea9" containerID="5fb0e5eada5b1d562d7c908cae336c72bd63c7347f33cd77241262f3807c516b" exitCode=0 Dec 02 10:58:44 crc kubenswrapper[4813]: I1202 10:58:44.095702 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" event={"ID":"485c1630-9c8a-4474-b75c-4bfed04bcea9","Type":"ContainerDied","Data":"5fb0e5eada5b1d562d7c908cae336c72bd63c7347f33cd77241262f3807c516b"} Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.492516 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623056 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623137 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mwzf\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-kube-api-access-2mwzf\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623189 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-bootstrap-combined-ca-bundle\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623240 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-libvirt-combined-ca-bundle\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623290 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ceph\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623338 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ssh-key\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-neutron-metadata-combined-ca-bundle\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623435 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ovn-combined-ca-bundle\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623465 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-repo-setup-combined-ca-bundle\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623502 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-nova-combined-ca-bundle\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623681 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.623740 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-inventory\") pod \"485c1630-9c8a-4474-b75c-4bfed04bcea9\" (UID: \"485c1630-9c8a-4474-b75c-4bfed04bcea9\") " Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.629934 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.630031 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.630395 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-kube-api-access-2mwzf" (OuterVolumeSpecName: "kube-api-access-2mwzf") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "kube-api-access-2mwzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.630504 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.630653 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.630765 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.631191 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.632016 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.632402 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.632839 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.635556 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ceph" (OuterVolumeSpecName: "ceph") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.663268 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.663899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-inventory" (OuterVolumeSpecName: "inventory") pod "485c1630-9c8a-4474-b75c-4bfed04bcea9" (UID: "485c1630-9c8a-4474-b75c-4bfed04bcea9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.727178 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.727251 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mwzf\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-kube-api-access-2mwzf\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.727272 4813 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.727299 4813 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.740310 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.740360 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.740774 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.741020 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.741059 4813 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.741472 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.743464 4813 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.743537 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/485c1630-9c8a-4474-b75c-4bfed04bcea9-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Dec 02 10:58:45 crc kubenswrapper[4813]: I1202 10:58:45.743561 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/485c1630-9c8a-4474-b75c-4bfed04bcea9-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.106802 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" event={"ID":"485c1630-9c8a-4474-b75c-4bfed04bcea9","Type":"ContainerDied","Data":"e180b15fc09c43d59aa888d722f0790a9d4bae061d232e6ff359c52d25eac8b0"} Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.106846 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e180b15fc09c43d59aa888d722f0790a9d4bae061d232e6ff359c52d25eac8b0" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.106928 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htwnn" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.249588 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr"] Dec 02 10:58:46 crc kubenswrapper[4813]: E1202 10:58:46.250244 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485c1630-9c8a-4474-b75c-4bfed04bcea9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.250273 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="485c1630-9c8a-4474-b75c-4bfed04bcea9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.250705 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="485c1630-9c8a-4474-b75c-4bfed04bcea9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.251770 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.256732 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.257136 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.257378 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.257659 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.260024 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr"] Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.262183 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.353387 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.353504 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.353811 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.353946 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbjg\" (UniqueName: \"kubernetes.io/projected/0d080fa4-1ffb-4c15-beb2-110224e86841-kube-api-access-pwbjg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.456332 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.456609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.456702 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbjg\" (UniqueName: \"kubernetes.io/projected/0d080fa4-1ffb-4c15-beb2-110224e86841-kube-api-access-pwbjg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.456790 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.460551 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.463818 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.464694 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.478054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbjg\" (UniqueName: \"kubernetes.io/projected/0d080fa4-1ffb-4c15-beb2-110224e86841-kube-api-access-pwbjg\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:46 crc kubenswrapper[4813]: I1202 10:58:46.575541 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:47 crc kubenswrapper[4813]: I1202 10:58:47.091367 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr"] Dec 02 10:58:47 crc kubenswrapper[4813]: W1202 10:58:47.098782 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d080fa4_1ffb_4c15_beb2_110224e86841.slice/crio-4fe696b5b5b98c5aa50991f004e0565bb8b5ba93501133c49e48a7d61c566d31 WatchSource:0}: Error finding container 4fe696b5b5b98c5aa50991f004e0565bb8b5ba93501133c49e48a7d61c566d31: Status 404 returned error can't find the container with id 4fe696b5b5b98c5aa50991f004e0565bb8b5ba93501133c49e48a7d61c566d31 Dec 02 10:58:47 crc kubenswrapper[4813]: I1202 10:58:47.116532 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" event={"ID":"0d080fa4-1ffb-4c15-beb2-110224e86841","Type":"ContainerStarted","Data":"4fe696b5b5b98c5aa50991f004e0565bb8b5ba93501133c49e48a7d61c566d31"} Dec 02 10:58:49 crc kubenswrapper[4813]: I1202 10:58:49.132934 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" event={"ID":"0d080fa4-1ffb-4c15-beb2-110224e86841","Type":"ContainerStarted","Data":"2bb1dffb4a06d3d4defc4ccc2ea01c8c26f35389733ba70c1013cca86e1c0248"} Dec 02 10:58:49 crc kubenswrapper[4813]: I1202 10:58:49.152396 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" podStartSLOduration=2.326961491 podStartE2EDuration="3.152376704s" podCreationTimestamp="2025-12-02 10:58:46 +0000 UTC" firstStartedPulling="2025-12-02 10:58:47.103402564 +0000 UTC m=+3051.298576876" lastFinishedPulling="2025-12-02 10:58:47.928817747 +0000 UTC m=+3052.123992089" observedRunningTime="2025-12-02 10:58:49.147982389 +0000 UTC m=+3053.343156731" watchObservedRunningTime="2025-12-02 10:58:49.152376704 +0000 UTC m=+3053.347551006" Dec 02 10:58:50 crc kubenswrapper[4813]: I1202 10:58:50.069151 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:58:50 crc kubenswrapper[4813]: E1202 10:58:50.073710 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:58:54 crc kubenswrapper[4813]: I1202 10:58:54.182772 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d080fa4-1ffb-4c15-beb2-110224e86841" containerID="2bb1dffb4a06d3d4defc4ccc2ea01c8c26f35389733ba70c1013cca86e1c0248" exitCode=0 Dec 02 10:58:54 crc kubenswrapper[4813]: I1202 10:58:54.182850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" event={"ID":"0d080fa4-1ffb-4c15-beb2-110224e86841","Type":"ContainerDied","Data":"2bb1dffb4a06d3d4defc4ccc2ea01c8c26f35389733ba70c1013cca86e1c0248"} Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.579783 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.622328 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwbjg\" (UniqueName: \"kubernetes.io/projected/0d080fa4-1ffb-4c15-beb2-110224e86841-kube-api-access-pwbjg\") pod \"0d080fa4-1ffb-4c15-beb2-110224e86841\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.622426 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-inventory\") pod \"0d080fa4-1ffb-4c15-beb2-110224e86841\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.622472 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ceph\") pod \"0d080fa4-1ffb-4c15-beb2-110224e86841\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.622561 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ssh-key\") pod \"0d080fa4-1ffb-4c15-beb2-110224e86841\" (UID: \"0d080fa4-1ffb-4c15-beb2-110224e86841\") " Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.627990 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d080fa4-1ffb-4c15-beb2-110224e86841-kube-api-access-pwbjg" (OuterVolumeSpecName: "kube-api-access-pwbjg") pod "0d080fa4-1ffb-4c15-beb2-110224e86841" (UID: "0d080fa4-1ffb-4c15-beb2-110224e86841"). InnerVolumeSpecName "kube-api-access-pwbjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.628651 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ceph" (OuterVolumeSpecName: "ceph") pod "0d080fa4-1ffb-4c15-beb2-110224e86841" (UID: "0d080fa4-1ffb-4c15-beb2-110224e86841"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.646889 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0d080fa4-1ffb-4c15-beb2-110224e86841" (UID: "0d080fa4-1ffb-4c15-beb2-110224e86841"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.650527 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-inventory" (OuterVolumeSpecName: "inventory") pod "0d080fa4-1ffb-4c15-beb2-110224e86841" (UID: "0d080fa4-1ffb-4c15-beb2-110224e86841"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.724412 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.724475 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwbjg\" (UniqueName: \"kubernetes.io/projected/0d080fa4-1ffb-4c15-beb2-110224e86841-kube-api-access-pwbjg\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.724489 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:55 crc kubenswrapper[4813]: I1202 10:58:55.724500 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0d080fa4-1ffb-4c15-beb2-110224e86841-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.201855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" event={"ID":"0d080fa4-1ffb-4c15-beb2-110224e86841","Type":"ContainerDied","Data":"4fe696b5b5b98c5aa50991f004e0565bb8b5ba93501133c49e48a7d61c566d31"} Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.201894 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe696b5b5b98c5aa50991f004e0565bb8b5ba93501133c49e48a7d61c566d31" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.201912 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.337874 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj"] Dec 02 10:58:56 crc kubenswrapper[4813]: E1202 10:58:56.338229 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d080fa4-1ffb-4c15-beb2-110224e86841" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.338251 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d080fa4-1ffb-4c15-beb2-110224e86841" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.338436 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d080fa4-1ffb-4c15-beb2-110224e86841" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.339057 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.341803 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.341803 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.341829 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.341830 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.344574 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.346541 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.355466 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj"] Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.436041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.436142 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.436165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.436252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.436330 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6khq\" (UniqueName: \"kubernetes.io/projected/8533db13-ff2a-4a5e-8a6e-30dad8252d93-kube-api-access-w6khq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 
crc kubenswrapper[4813]: I1202 10:58:56.436361 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.537521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.537844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.537866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.537915 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.537963 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6khq\" (UniqueName: \"kubernetes.io/projected/8533db13-ff2a-4a5e-8a6e-30dad8252d93-kube-api-access-w6khq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.537982 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.539636 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.544130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.544210 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.544303 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.545408 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.559390 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6khq\" (UniqueName: \"kubernetes.io/projected/8533db13-ff2a-4a5e-8a6e-30dad8252d93-kube-api-access-w6khq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks2vj\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:56 crc kubenswrapper[4813]: I1202 10:58:56.656736 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 10:58:57 crc kubenswrapper[4813]: I1202 10:58:57.007789 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj"] Dec 02 10:58:57 crc kubenswrapper[4813]: W1202 10:58:57.010580 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8533db13_ff2a_4a5e_8a6e_30dad8252d93.slice/crio-c0199511f8c661786c3a912642b204e4036398fc002790e28f6ffce680ac5153 WatchSource:0}: Error finding container c0199511f8c661786c3a912642b204e4036398fc002790e28f6ffce680ac5153: Status 404 returned error can't find the container with id c0199511f8c661786c3a912642b204e4036398fc002790e28f6ffce680ac5153 Dec 02 10:58:57 crc kubenswrapper[4813]: I1202 10:58:57.216328 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" event={"ID":"8533db13-ff2a-4a5e-8a6e-30dad8252d93","Type":"ContainerStarted","Data":"c0199511f8c661786c3a912642b204e4036398fc002790e28f6ffce680ac5153"} Dec 02 10:58:58 crc kubenswrapper[4813]: I1202 10:58:58.226870 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" event={"ID":"8533db13-ff2a-4a5e-8a6e-30dad8252d93","Type":"ContainerStarted","Data":"cc3d1ae5c19d93e1a4b74bc0628f7a297431f3bb9427db56e5e03537acfe9281"} Dec 02 10:58:58 crc kubenswrapper[4813]: I1202 10:58:58.251948 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" podStartSLOduration=1.805290106 podStartE2EDuration="2.251929436s" podCreationTimestamp="2025-12-02 10:58:56 +0000 UTC" firstStartedPulling="2025-12-02 10:58:57.013412346 +0000 UTC m=+3061.208586648" lastFinishedPulling="2025-12-02 10:58:57.460051676 +0000 UTC m=+3061.655225978" observedRunningTime="2025-12-02 10:58:58.240159352 +0000 UTC m=+3062.435333684" watchObservedRunningTime="2025-12-02 10:58:58.251929436 +0000 UTC m=+3062.447103738" Dec 02 10:59:04 crc kubenswrapper[4813]: I1202 10:59:04.068694 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:59:04 crc kubenswrapper[4813]: E1202 10:59:04.070051 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:59:16 crc kubenswrapper[4813]: I1202 10:59:16.075680 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:59:16 crc kubenswrapper[4813]: E1202 10:59:16.076442 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:59:30 crc kubenswrapper[4813]: I1202 10:59:30.243015 4813 scope.go:117] 
"RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:59:30 crc kubenswrapper[4813]: E1202 10:59:30.243766 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:59:44 crc kubenswrapper[4813]: I1202 10:59:44.068950 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:59:44 crc kubenswrapper[4813]: E1202 10:59:44.070183 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 10:59:59 crc kubenswrapper[4813]: I1202 10:59:59.068005 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 10:59:59 crc kubenswrapper[4813]: E1202 10:59:59.068833 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.169673 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7"] Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.171979 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.174786 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.177495 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.185054 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7"] Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.260624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlt67\" (UniqueName: \"kubernetes.io/projected/507629a1-b497-4708-9533-1fd8c258584c-kube-api-access-jlt67\") pod \"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.260905 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/507629a1-b497-4708-9533-1fd8c258584c-config-volume\") pod \"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.261272 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/507629a1-b497-4708-9533-1fd8c258584c-secret-volume\") pod \"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.363273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/507629a1-b497-4708-9533-1fd8c258584c-secret-volume\") pod \"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.363350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlt67\" (UniqueName: \"kubernetes.io/projected/507629a1-b497-4708-9533-1fd8c258584c-kube-api-access-jlt67\") pod \"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.363439 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/507629a1-b497-4708-9533-1fd8c258584c-config-volume\") pod \"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.364638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/507629a1-b497-4708-9533-1fd8c258584c-config-volume\") pod 
\"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.372500 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/507629a1-b497-4708-9533-1fd8c258584c-secret-volume\") pod \"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.382617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlt67\" (UniqueName: \"kubernetes.io/projected/507629a1-b497-4708-9533-1fd8c258584c-kube-api-access-jlt67\") pod \"collect-profiles-29411220-7hhl7\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.501686 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:00 crc kubenswrapper[4813]: I1202 11:00:00.928601 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7"] Dec 02 11:00:00 crc kubenswrapper[4813]: W1202 11:00:00.930451 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507629a1_b497_4708_9533_1fd8c258584c.slice/crio-2164c0d657afefdeb1a8dde6369332843b4e209450101ffa492bb7815a463820 WatchSource:0}: Error finding container 2164c0d657afefdeb1a8dde6369332843b4e209450101ffa492bb7815a463820: Status 404 returned error can't find the container with id 2164c0d657afefdeb1a8dde6369332843b4e209450101ffa492bb7815a463820 Dec 02 11:00:01 crc kubenswrapper[4813]: I1202 11:00:01.817214 4813 generic.go:334] "Generic (PLEG): container finished" podID="507629a1-b497-4708-9533-1fd8c258584c" containerID="0e54cfd4750cfde723a589c46333703f2e0f0da2c99bb2dd1184895f2c1403c6" exitCode=0 Dec 02 11:00:01 crc kubenswrapper[4813]: I1202 11:00:01.817293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" event={"ID":"507629a1-b497-4708-9533-1fd8c258584c","Type":"ContainerDied","Data":"0e54cfd4750cfde723a589c46333703f2e0f0da2c99bb2dd1184895f2c1403c6"} Dec 02 11:00:01 crc kubenswrapper[4813]: I1202 11:00:01.817542 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" event={"ID":"507629a1-b497-4708-9533-1fd8c258584c","Type":"ContainerStarted","Data":"2164c0d657afefdeb1a8dde6369332843b4e209450101ffa492bb7815a463820"} Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.125817 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.213881 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/507629a1-b497-4708-9533-1fd8c258584c-secret-volume\") pod \"507629a1-b497-4708-9533-1fd8c258584c\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.214038 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/507629a1-b497-4708-9533-1fd8c258584c-config-volume\") pod \"507629a1-b497-4708-9533-1fd8c258584c\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.214148 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlt67\" (UniqueName: \"kubernetes.io/projected/507629a1-b497-4708-9533-1fd8c258584c-kube-api-access-jlt67\") pod \"507629a1-b497-4708-9533-1fd8c258584c\" (UID: \"507629a1-b497-4708-9533-1fd8c258584c\") " Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.214750 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507629a1-b497-4708-9533-1fd8c258584c-config-volume" (OuterVolumeSpecName: "config-volume") pod "507629a1-b497-4708-9533-1fd8c258584c" (UID: "507629a1-b497-4708-9533-1fd8c258584c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.214949 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/507629a1-b497-4708-9533-1fd8c258584c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.219684 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507629a1-b497-4708-9533-1fd8c258584c-kube-api-access-jlt67" (OuterVolumeSpecName: "kube-api-access-jlt67") pod "507629a1-b497-4708-9533-1fd8c258584c" (UID: "507629a1-b497-4708-9533-1fd8c258584c"). InnerVolumeSpecName "kube-api-access-jlt67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.219925 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507629a1-b497-4708-9533-1fd8c258584c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "507629a1-b497-4708-9533-1fd8c258584c" (UID: "507629a1-b497-4708-9533-1fd8c258584c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.316326 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlt67\" (UniqueName: \"kubernetes.io/projected/507629a1-b497-4708-9533-1fd8c258584c-kube-api-access-jlt67\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.316355 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/507629a1-b497-4708-9533-1fd8c258584c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.839091 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" event={"ID":"507629a1-b497-4708-9533-1fd8c258584c","Type":"ContainerDied","Data":"2164c0d657afefdeb1a8dde6369332843b4e209450101ffa492bb7815a463820"} Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.839537 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2164c0d657afefdeb1a8dde6369332843b4e209450101ffa492bb7815a463820" Dec 02 11:00:03 crc kubenswrapper[4813]: I1202 11:00:03.839177 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7" Dec 02 11:00:04 crc kubenswrapper[4813]: I1202 11:00:04.216807 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"] Dec 02 11:00:04 crc kubenswrapper[4813]: I1202 11:00:04.230256 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411175-2xxld"] Dec 02 11:00:06 crc kubenswrapper[4813]: I1202 11:00:06.083052 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b" path="/var/lib/kubelet/pods/6a4fd0ca-bbfd-4ac5-84b9-668a8feb616b/volumes" Dec 02 11:00:12 crc kubenswrapper[4813]: I1202 11:00:12.929874 4813 generic.go:334] "Generic (PLEG): container finished" podID="8533db13-ff2a-4a5e-8a6e-30dad8252d93" containerID="cc3d1ae5c19d93e1a4b74bc0628f7a297431f3bb9427db56e5e03537acfe9281" exitCode=0 Dec 02 11:00:12 crc kubenswrapper[4813]: I1202 11:00:12.929964 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" event={"ID":"8533db13-ff2a-4a5e-8a6e-30dad8252d93","Type":"ContainerDied","Data":"cc3d1ae5c19d93e1a4b74bc0628f7a297431f3bb9427db56e5e03537acfe9281"} Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.068327 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 11:00:14 crc kubenswrapper[4813]: E1202 11:00:14.069098 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.369243 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.432056 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovncontroller-config-0\") pod \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.432193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovn-combined-ca-bundle\") pod \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.432270 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ssh-key\") pod \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.432433 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ceph\") pod \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.432467 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-inventory\") pod \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.432546 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6khq\" (UniqueName: \"kubernetes.io/projected/8533db13-ff2a-4a5e-8a6e-30dad8252d93-kube-api-access-w6khq\") pod \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\" (UID: \"8533db13-ff2a-4a5e-8a6e-30dad8252d93\") " Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.438154 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ceph" (OuterVolumeSpecName: "ceph") pod "8533db13-ff2a-4a5e-8a6e-30dad8252d93" (UID: "8533db13-ff2a-4a5e-8a6e-30dad8252d93"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.438651 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8533db13-ff2a-4a5e-8a6e-30dad8252d93" (UID: "8533db13-ff2a-4a5e-8a6e-30dad8252d93"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.449310 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8533db13-ff2a-4a5e-8a6e-30dad8252d93-kube-api-access-w6khq" (OuterVolumeSpecName: "kube-api-access-w6khq") pod "8533db13-ff2a-4a5e-8a6e-30dad8252d93" (UID: "8533db13-ff2a-4a5e-8a6e-30dad8252d93"). InnerVolumeSpecName "kube-api-access-w6khq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.456543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8533db13-ff2a-4a5e-8a6e-30dad8252d93" (UID: "8533db13-ff2a-4a5e-8a6e-30dad8252d93"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.457427 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-inventory" (OuterVolumeSpecName: "inventory") pod "8533db13-ff2a-4a5e-8a6e-30dad8252d93" (UID: "8533db13-ff2a-4a5e-8a6e-30dad8252d93"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.459446 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8533db13-ff2a-4a5e-8a6e-30dad8252d93" (UID: "8533db13-ff2a-4a5e-8a6e-30dad8252d93"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.535518 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.535565 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.535586 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.535605 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8533db13-ff2a-4a5e-8a6e-30dad8252d93-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.535623 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6khq\" (UniqueName: \"kubernetes.io/projected/8533db13-ff2a-4a5e-8a6e-30dad8252d93-kube-api-access-w6khq\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.535640 4813 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8533db13-ff2a-4a5e-8a6e-30dad8252d93-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.949841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj" event={"ID":"8533db13-ff2a-4a5e-8a6e-30dad8252d93","Type":"ContainerDied","Data":"c0199511f8c661786c3a912642b204e4036398fc002790e28f6ffce680ac5153"} Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 11:00:14.949878 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0199511f8c661786c3a912642b204e4036398fc002790e28f6ffce680ac5153" Dec 02 11:00:14 crc kubenswrapper[4813]: I1202 
11:00:14.949946 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks2vj"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.050646 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"]
Dec 02 11:00:15 crc kubenswrapper[4813]: E1202 11:00:15.051142 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507629a1-b497-4708-9533-1fd8c258584c" containerName="collect-profiles"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.051169 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="507629a1-b497-4708-9533-1fd8c258584c" containerName="collect-profiles"
Dec 02 11:00:15 crc kubenswrapper[4813]: E1202 11:00:15.051195 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8533db13-ff2a-4a5e-8a6e-30dad8252d93" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.051204 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8533db13-ff2a-4a5e-8a6e-30dad8252d93" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.051434 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="507629a1-b497-4708-9533-1fd8c258584c" containerName="collect-profiles"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.051458 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8533db13-ff2a-4a5e-8a6e-30dad8252d93" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.052153 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.054458 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.055372 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.055538 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.055692 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.056114 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.056608 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.056778 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.065328 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"]
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.146960 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgh74\" (UniqueName: \"kubernetes.io/projected/a4be7c16-2599-4533-9efd-256afaa43b58-kube-api-access-wgh74\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.147030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.147053 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.147095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.147131 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.147180 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.147249 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.249010 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.249269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgh74\" (UniqueName: \"kubernetes.io/projected/a4be7c16-2599-4533-9efd-256afaa43b58-kube-api-access-wgh74\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.249315 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.249337 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.249360 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.249428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.249517 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.254183 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.254194 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.254290 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.254617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.255433 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.257936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.278108 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgh74\" (UniqueName: \"kubernetes.io/projected/a4be7c16-2599-4533-9efd-256afaa43b58-kube-api-access-wgh74\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.368427 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.948865 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"]
Dec 02 11:00:15 crc kubenswrapper[4813]: I1202 11:00:15.958656 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6" event={"ID":"a4be7c16-2599-4533-9efd-256afaa43b58","Type":"ContainerStarted","Data":"66bc698c8da7284cd78521e8bc591443fc9718d0b6cf778d83db0406245b9492"}
Dec 02 11:00:17 crc kubenswrapper[4813]: I1202 11:00:17.976305 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6" event={"ID":"a4be7c16-2599-4533-9efd-256afaa43b58","Type":"ContainerStarted","Data":"81669a76d129d36e88a2d88a83d32d29cbc556e33e553ea28ecedc385866bbbd"}
Dec 02 11:00:28 crc kubenswrapper[4813]: I1202 11:00:28.068213 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a"
Dec 02 11:00:28 crc kubenswrapper[4813]: E1202 11:00:28.069159 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:00:39 crc kubenswrapper[4813]: I1202 11:00:39.068505 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a"
Dec 02 11:00:39 crc kubenswrapper[4813]: E1202 11:00:39.069219 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:00:54 crc kubenswrapper[4813]: I1202 11:00:54.069518 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a"
Dec 02 11:00:54 crc kubenswrapper[4813]: E1202 11:00:54.070709 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.167168 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6" podStartSLOduration=44.060233242 podStartE2EDuration="45.167144193s" podCreationTimestamp="2025-12-02 11:00:15 +0000 UTC" firstStartedPulling="2025-12-02 11:00:15.948214884 +0000 UTC m=+3140.143389196" lastFinishedPulling="2025-12-02 11:00:17.055125835 +0000 UTC m=+3141.250300147" observedRunningTime="2025-12-02 11:00:17.993498608 +0000 UTC m=+3142.188672920" watchObservedRunningTime="2025-12-02 11:01:00.167144193 +0000 UTC m=+3184.362318515"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.175761 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29411221-wc6vj"]
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.178325 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.186170 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29411221-wc6vj"]
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.313007 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbfg\" (UniqueName: \"kubernetes.io/projected/2c04c0e4-5a90-4287-bcbe-11e190ec6005-kube-api-access-dkbfg\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.313110 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-fernet-keys\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.313140 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-config-data\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.313164 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-combined-ca-bundle\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.415353 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbfg\" (UniqueName: \"kubernetes.io/projected/2c04c0e4-5a90-4287-bcbe-11e190ec6005-kube-api-access-dkbfg\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.415451 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-fernet-keys\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.415477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-config-data\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.415499 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-combined-ca-bundle\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.422865 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-combined-ca-bundle\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.425027 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-config-data\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.425720 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-fernet-keys\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.437267 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbfg\" (UniqueName: \"kubernetes.io/projected/2c04c0e4-5a90-4287-bcbe-11e190ec6005-kube-api-access-dkbfg\") pod \"keystone-cron-29411221-wc6vj\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") " pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:00 crc kubenswrapper[4813]: I1202 11:01:00.521382 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:01 crc kubenswrapper[4813]: I1202 11:01:01.045426 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29411221-wc6vj"]
Dec 02 11:01:01 crc kubenswrapper[4813]: I1202 11:01:01.372296 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411221-wc6vj" event={"ID":"2c04c0e4-5a90-4287-bcbe-11e190ec6005","Type":"ContainerStarted","Data":"bec430fc2ceed816b087b0548d9531004fda9aae28ee39347bfe630a1fbb73ed"}
Dec 02 11:01:01 crc kubenswrapper[4813]: I1202 11:01:01.372565 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411221-wc6vj" event={"ID":"2c04c0e4-5a90-4287-bcbe-11e190ec6005","Type":"ContainerStarted","Data":"869b2fd8e6c4a9642b1ee6aedeea643bae873f22ba9bc682a7288323cbb99ef2"}
Dec 02 11:01:01 crc kubenswrapper[4813]: I1202 11:01:01.401623 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29411221-wc6vj" podStartSLOduration=1.401601649 podStartE2EDuration="1.401601649s" podCreationTimestamp="2025-12-02 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:01:01.393280553 +0000 UTC m=+3185.588454865" watchObservedRunningTime="2025-12-02 11:01:01.401601649 +0000 UTC m=+3185.596775961"
Dec 02 11:01:03 crc kubenswrapper[4813]: I1202 11:01:03.390805 4813 generic.go:334] "Generic (PLEG): container finished" podID="2c04c0e4-5a90-4287-bcbe-11e190ec6005" containerID="bec430fc2ceed816b087b0548d9531004fda9aae28ee39347bfe630a1fbb73ed" exitCode=0
Dec 02 11:01:03 crc kubenswrapper[4813]: I1202 11:01:03.390922 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411221-wc6vj" event={"ID":"2c04c0e4-5a90-4287-bcbe-11e190ec6005","Type":"ContainerDied","Data":"bec430fc2ceed816b087b0548d9531004fda9aae28ee39347bfe630a1fbb73ed"}
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.776259 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.905700 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-fernet-keys\") pod \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") "
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.906109 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-config-data\") pod \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") "
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.906232 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-combined-ca-bundle\") pod \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") "
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.906282 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkbfg\" (UniqueName: \"kubernetes.io/projected/2c04c0e4-5a90-4287-bcbe-11e190ec6005-kube-api-access-dkbfg\") pod \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\" (UID: \"2c04c0e4-5a90-4287-bcbe-11e190ec6005\") "
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.911620 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2c04c0e4-5a90-4287-bcbe-11e190ec6005" (UID: "2c04c0e4-5a90-4287-bcbe-11e190ec6005"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.911766 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c04c0e4-5a90-4287-bcbe-11e190ec6005-kube-api-access-dkbfg" (OuterVolumeSpecName: "kube-api-access-dkbfg") pod "2c04c0e4-5a90-4287-bcbe-11e190ec6005" (UID: "2c04c0e4-5a90-4287-bcbe-11e190ec6005"). InnerVolumeSpecName "kube-api-access-dkbfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.932302 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c04c0e4-5a90-4287-bcbe-11e190ec6005" (UID: "2c04c0e4-5a90-4287-bcbe-11e190ec6005"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:04 crc kubenswrapper[4813]: I1202 11:01:04.959722 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-config-data" (OuterVolumeSpecName: "config-data") pod "2c04c0e4-5a90-4287-bcbe-11e190ec6005" (UID: "2c04c0e4-5a90-4287-bcbe-11e190ec6005"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.008785 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.008836 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.008845 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c04c0e4-5a90-4287-bcbe-11e190ec6005-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.008859 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkbfg\" (UniqueName: \"kubernetes.io/projected/2c04c0e4-5a90-4287-bcbe-11e190ec6005-kube-api-access-dkbfg\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.068218 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a"
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.414006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411221-wc6vj" event={"ID":"2c04c0e4-5a90-4287-bcbe-11e190ec6005","Type":"ContainerDied","Data":"869b2fd8e6c4a9642b1ee6aedeea643bae873f22ba9bc682a7288323cbb99ef2"}
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.414346 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="869b2fd8e6c4a9642b1ee6aedeea643bae873f22ba9bc682a7288323cbb99ef2"
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.414138 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411221-wc6vj"
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.416969 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"4c3e81ef0544582e3698354dd375931f076f345ee7fd7985e93a5f7436257ada"}
Dec 02 11:01:05 crc kubenswrapper[4813]: I1202 11:01:05.958271 4813 scope.go:117] "RemoveContainer" containerID="ca92fbad47d844692898faa73b0825dfa65e01287964691b0cca51a5c224139c"
Dec 02 11:01:19 crc kubenswrapper[4813]: I1202 11:01:19.545997 4813 generic.go:334] "Generic (PLEG): container finished" podID="a4be7c16-2599-4533-9efd-256afaa43b58" containerID="81669a76d129d36e88a2d88a83d32d29cbc556e33e553ea28ecedc385866bbbd" exitCode=0
Dec 02 11:01:19 crc kubenswrapper[4813]: I1202 11:01:19.546112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6" event={"ID":"a4be7c16-2599-4533-9efd-256afaa43b58","Type":"ContainerDied","Data":"81669a76d129d36e88a2d88a83d32d29cbc556e33e553ea28ecedc385866bbbd"}
Dec 02 11:01:20 crc kubenswrapper[4813]: I1202 11:01:20.931598 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.011720 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-metadata-combined-ca-bundle\") pod \"a4be7c16-2599-4533-9efd-256afaa43b58\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") "
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.011784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ssh-key\") pod \"a4be7c16-2599-4533-9efd-256afaa43b58\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") "
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.011827 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-nova-metadata-neutron-config-0\") pod \"a4be7c16-2599-4533-9efd-256afaa43b58\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") "
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.011845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-inventory\") pod \"a4be7c16-2599-4533-9efd-256afaa43b58\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") "
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.011860 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ceph\") pod \"a4be7c16-2599-4533-9efd-256afaa43b58\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") "
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.011937 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-ovn-metadata-agent-neutron-config-0\") pod \"a4be7c16-2599-4533-9efd-256afaa43b58\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") "
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.011969 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgh74\" (UniqueName: \"kubernetes.io/projected/a4be7c16-2599-4533-9efd-256afaa43b58-kube-api-access-wgh74\") pod \"a4be7c16-2599-4533-9efd-256afaa43b58\" (UID: \"a4be7c16-2599-4533-9efd-256afaa43b58\") "
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.017385 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ceph" (OuterVolumeSpecName: "ceph") pod "a4be7c16-2599-4533-9efd-256afaa43b58" (UID: "a4be7c16-2599-4533-9efd-256afaa43b58"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.017681 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a4be7c16-2599-4533-9efd-256afaa43b58" (UID: "a4be7c16-2599-4533-9efd-256afaa43b58"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.018664 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4be7c16-2599-4533-9efd-256afaa43b58-kube-api-access-wgh74" (OuterVolumeSpecName: "kube-api-access-wgh74") pod "a4be7c16-2599-4533-9efd-256afaa43b58" (UID: "a4be7c16-2599-4533-9efd-256afaa43b58"). InnerVolumeSpecName "kube-api-access-wgh74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.038066 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a4be7c16-2599-4533-9efd-256afaa43b58" (UID: "a4be7c16-2599-4533-9efd-256afaa43b58"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.038362 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "a4be7c16-2599-4533-9efd-256afaa43b58" (UID: "a4be7c16-2599-4533-9efd-256afaa43b58"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.040932 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-inventory" (OuterVolumeSpecName: "inventory") pod "a4be7c16-2599-4533-9efd-256afaa43b58" (UID: "a4be7c16-2599-4533-9efd-256afaa43b58"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.046893 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "a4be7c16-2599-4533-9efd-256afaa43b58" (UID: "a4be7c16-2599-4533-9efd-256afaa43b58"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.113509 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.113540 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.113551 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.113560 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.113569 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-ceph\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.113578 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a4be7c16-2599-4533-9efd-256afaa43b58-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.113587 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgh74\" (UniqueName: \"kubernetes.io/projected/a4be7c16-2599-4533-9efd-256afaa43b58-kube-api-access-wgh74\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.564247 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6" event={"ID":"a4be7c16-2599-4533-9efd-256afaa43b58","Type":"ContainerDied","Data":"66bc698c8da7284cd78521e8bc591443fc9718d0b6cf778d83db0406245b9492"}
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.564294 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.564322 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66bc698c8da7284cd78521e8bc591443fc9718d0b6cf778d83db0406245b9492"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.741435 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"]
Dec 02 11:01:21 crc kubenswrapper[4813]: E1202 11:01:21.742022 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c04c0e4-5a90-4287-bcbe-11e190ec6005" containerName="keystone-cron"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.742052 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c04c0e4-5a90-4287-bcbe-11e190ec6005" containerName="keystone-cron"
Dec 02 11:01:21 crc kubenswrapper[4813]: E1202 11:01:21.742113 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4be7c16-2599-4533-9efd-256afaa43b58" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.742130 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4be7c16-2599-4533-9efd-256afaa43b58" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.742412 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4be7c16-2599-4533-9efd-256afaa43b58" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.742462 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c04c0e4-5a90-4287-bcbe-11e190ec6005" containerName="keystone-cron"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.743434 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.746374 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.746453 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.746863 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.747060 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.747130 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.748118 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.753427 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"]
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.826417 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.826471 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.826503 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.826555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.826584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tjsh\" (UniqueName: \"kubernetes.io/projected/d12b539d-a4ef-4c0d-9770-af7b7543d284-kube-api-access-9tjsh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.826729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.928384 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.928807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.928914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.929026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.929148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tjsh\" (UniqueName: \"kubernetes.io/projected/d12b539d-a4ef-4c0d-9770-af7b7543d284-kube-api-access-9tjsh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.929297 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.933736 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.933784 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.934589 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.934625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.934780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:21 crc kubenswrapper[4813]: I1202 11:01:21.946322 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tjsh\" (UniqueName: \"kubernetes.io/projected/d12b539d-a4ef-4c0d-9770-af7b7543d284-kube-api-access-9tjsh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:22 crc kubenswrapper[4813]: I1202 11:01:22.065699 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"
Dec 02 11:01:22 crc kubenswrapper[4813]: I1202 11:01:22.594223 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7"]
Dec 02 11:01:22 crc kubenswrapper[4813]: W1202 11:01:22.594850 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd12b539d_a4ef_4c0d_9770_af7b7543d284.slice/crio-f9e82732149422b285531502f122faeeef86a15eb5d770a511c74d76c95d8f08 WatchSource:0}: Error finding container f9e82732149422b285531502f122faeeef86a15eb5d770a511c74d76c95d8f08: Status 404 returned error can't find the container with id f9e82732149422b285531502f122faeeef86a15eb5d770a511c74d76c95d8f08
Dec 02 11:01:23 crc kubenswrapper[4813]: I1202 11:01:23.584589 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7" event={"ID":"d12b539d-a4ef-4c0d-9770-af7b7543d284","Type":"ContainerStarted","Data":"edbe08dae2ea72d9a204cca65ee2703b9acc9ae97d24c2e1903accb13ef12b06"}
Dec 02 11:01:23 crc kubenswrapper[4813]: I1202 11:01:23.585433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7" event={"ID":"d12b539d-a4ef-4c0d-9770-af7b7543d284","Type":"ContainerStarted","Data":"f9e82732149422b285531502f122faeeef86a15eb5d770a511c74d76c95d8f08"}
Dec 02 11:01:23 crc kubenswrapper[4813]: I1202 11:01:23.614671 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7" podStartSLOduration=1.900015912 podStartE2EDuration="2.614652748s" podCreationTimestamp="2025-12-02 11:01:21 +0000 UTC" firstStartedPulling="2025-12-02 11:01:22.596780236 +0000 UTC m=+3206.791954538" lastFinishedPulling="2025-12-02 11:01:23.311417072 +0000 UTC m=+3207.506591374" observedRunningTime="2025-12-02 11:01:23.604745417 +0000 UTC m=+3207.799919739" watchObservedRunningTime="2025-12-02 11:01:23.614652748 +0000 UTC m=+3207.809827050"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.137299 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cjzzb"]
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.140591 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.161282 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjzzb"]
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.309741 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-utilities\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.310100 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-catalog-content\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.310388 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkfn\" (UniqueName: \"kubernetes.io/projected/a10488f1-d254-4b27-a6b6-91a662954601-kube-api-access-nhkfn\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.412892 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-utilities\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.413029 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-catalog-content\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.413104 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhkfn\" (UniqueName: \"kubernetes.io/projected/a10488f1-d254-4b27-a6b6-91a662954601-kube-api-access-nhkfn\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.413425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-utilities\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.413456 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-catalog-content\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.434328 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhkfn\" (UniqueName: \"kubernetes.io/projected/a10488f1-d254-4b27-a6b6-91a662954601-kube-api-access-nhkfn\") pod \"community-operators-cjzzb\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") " pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.463733 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:36 crc kubenswrapper[4813]: I1202 11:01:36.971863 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjzzb"]
Dec 02 11:01:36 crc kubenswrapper[4813]: W1202 11:01:36.976229 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda10488f1_d254_4b27_a6b6_91a662954601.slice/crio-9c74e7390361d4586f5fd50c4cf9a377d219d98b6a9547a405d4461f737d3a8f WatchSource:0}: Error finding container 9c74e7390361d4586f5fd50c4cf9a377d219d98b6a9547a405d4461f737d3a8f: Status 404 returned error can't find the container with id 9c74e7390361d4586f5fd50c4cf9a377d219d98b6a9547a405d4461f737d3a8f
Dec 02 11:01:37 crc kubenswrapper[4813]: I1202 11:01:37.712671 4813 generic.go:334] "Generic (PLEG): container finished" podID="a10488f1-d254-4b27-a6b6-91a662954601" containerID="785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d" exitCode=0
Dec 02 11:01:37 crc kubenswrapper[4813]: I1202 11:01:37.712770 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjzzb" event={"ID":"a10488f1-d254-4b27-a6b6-91a662954601","Type":"ContainerDied","Data":"785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d"}
Dec 02 11:01:37 crc kubenswrapper[4813]: I1202 11:01:37.713100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjzzb" event={"ID":"a10488f1-d254-4b27-a6b6-91a662954601","Type":"ContainerStarted","Data":"9c74e7390361d4586f5fd50c4cf9a377d219d98b6a9547a405d4461f737d3a8f"}
Dec 02 11:01:37 crc kubenswrapper[4813]: I1202 11:01:37.714670 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 11:01:38 crc kubenswrapper[4813]: I1202 11:01:38.722660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjzzb" event={"ID":"a10488f1-d254-4b27-a6b6-91a662954601","Type":"ContainerStarted","Data":"93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556"}
Dec 02 11:01:39 crc kubenswrapper[4813]: I1202 11:01:39.733483 4813 generic.go:334] "Generic (PLEG): container finished" podID="a10488f1-d254-4b27-a6b6-91a662954601" containerID="93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556" exitCode=0
Dec 02 11:01:39 crc kubenswrapper[4813]: I1202 11:01:39.733526 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjzzb" event={"ID":"a10488f1-d254-4b27-a6b6-91a662954601","Type":"ContainerDied","Data":"93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556"}
Dec 02 11:01:40 crc kubenswrapper[4813]: I1202 11:01:40.748400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjzzb" event={"ID":"a10488f1-d254-4b27-a6b6-91a662954601","Type":"ContainerStarted","Data":"3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507"}
Dec 02 11:01:40 crc kubenswrapper[4813]: I1202 11:01:40.785219 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cjzzb" podStartSLOduration=2.276267311 podStartE2EDuration="4.785193289s" podCreationTimestamp="2025-12-02 11:01:36 +0000 UTC" firstStartedPulling="2025-12-02 11:01:37.714381256 +0000 UTC m=+3221.909555558" lastFinishedPulling="2025-12-02 11:01:40.223307234 +0000 UTC m=+3224.418481536" observedRunningTime="2025-12-02 11:01:40.783242144 +0000 UTC m=+3224.978416456" watchObservedRunningTime="2025-12-02 11:01:40.785193289 +0000 UTC m=+3224.980367611"
Dec 02 11:01:46 crc kubenswrapper[4813]: I1202 11:01:46.465013 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:46 crc kubenswrapper[4813]: I1202 11:01:46.465718 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:46 crc kubenswrapper[4813]: I1202 11:01:46.522346 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:46 crc kubenswrapper[4813]: I1202 11:01:46.865482 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:46 crc kubenswrapper[4813]: I1202 11:01:46.933131 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjzzb"]
Dec 02 11:01:48 crc kubenswrapper[4813]: I1202 11:01:48.822810 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cjzzb" podUID="a10488f1-d254-4b27-a6b6-91a662954601" containerName="registry-server" containerID="cri-o://3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507" gracePeriod=2
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.256605 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.370527 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-catalog-content\") pod \"a10488f1-d254-4b27-a6b6-91a662954601\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") "
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.370711 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-utilities\") pod \"a10488f1-d254-4b27-a6b6-91a662954601\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") "
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.370753 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhkfn\" (UniqueName: \"kubernetes.io/projected/a10488f1-d254-4b27-a6b6-91a662954601-kube-api-access-nhkfn\") pod \"a10488f1-d254-4b27-a6b6-91a662954601\" (UID: \"a10488f1-d254-4b27-a6b6-91a662954601\") "
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.371822 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-utilities" (OuterVolumeSpecName: "utilities") pod "a10488f1-d254-4b27-a6b6-91a662954601" (UID: "a10488f1-d254-4b27-a6b6-91a662954601"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.376662 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10488f1-d254-4b27-a6b6-91a662954601-kube-api-access-nhkfn" (OuterVolumeSpecName: "kube-api-access-nhkfn") pod "a10488f1-d254-4b27-a6b6-91a662954601" (UID: "a10488f1-d254-4b27-a6b6-91a662954601"). InnerVolumeSpecName "kube-api-access-nhkfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.422146 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a10488f1-d254-4b27-a6b6-91a662954601" (UID: "a10488f1-d254-4b27-a6b6-91a662954601"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.473669 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.473704 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a10488f1-d254-4b27-a6b6-91a662954601-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.473718 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhkfn\" (UniqueName: \"kubernetes.io/projected/a10488f1-d254-4b27-a6b6-91a662954601-kube-api-access-nhkfn\") on node \"crc\" DevicePath \"\""
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.835886 4813 generic.go:334] "Generic (PLEG): container finished" podID="a10488f1-d254-4b27-a6b6-91a662954601" containerID="3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507" exitCode=0
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.835970 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjzzb"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.835960 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjzzb" event={"ID":"a10488f1-d254-4b27-a6b6-91a662954601","Type":"ContainerDied","Data":"3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507"}
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.836048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjzzb" event={"ID":"a10488f1-d254-4b27-a6b6-91a662954601","Type":"ContainerDied","Data":"9c74e7390361d4586f5fd50c4cf9a377d219d98b6a9547a405d4461f737d3a8f"}
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.836107 4813 scope.go:117] "RemoveContainer" containerID="3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.855785 4813 scope.go:117] "RemoveContainer" containerID="93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.868905 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjzzb"]
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.877160 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cjzzb"]
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.899321 4813 scope.go:117] "RemoveContainer" containerID="785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.923746 4813 scope.go:117] "RemoveContainer" containerID="3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507"
Dec 02 11:01:49 crc kubenswrapper[4813]: E1202 11:01:49.924221 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507\": container with ID starting with 3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507 not found: ID does not exist" containerID="3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.924251 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507"} err="failed to get container status \"3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507\": rpc error: code = NotFound desc = could not find container \"3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507\": container with ID starting with 3800654ac21181ea9dcae2e837d88f386f346d73086af219b5bdf5727f8ed507 not found: ID does not exist"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.924272 4813 scope.go:117] "RemoveContainer" containerID="93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556"
Dec 02 11:01:49 crc kubenswrapper[4813]: E1202 11:01:49.924590 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556\": container with ID starting with 93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556 not found: ID does not exist" containerID="93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.924662 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556"} err="failed to get container status \"93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556\": rpc error: code = NotFound desc = could not find container \"93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556\": container with ID starting with 93e75b9da52e094929a5db0dbd4a3611934bc4b8d36f74e6c7270439a3879556 not found: ID does not exist"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.924680 4813 scope.go:117] "RemoveContainer" containerID="785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d"
Dec 02 11:01:49 crc kubenswrapper[4813]: E1202 11:01:49.924886 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d\": container with ID starting with 785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d not found: ID does not exist" containerID="785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d"
Dec 02 11:01:49 crc kubenswrapper[4813]: I1202 11:01:49.924908 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d"} err="failed to get container status \"785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d\": rpc error: code = NotFound desc = could not find container \"785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d\": container with ID starting with 785f6ae4ecfa8ccf36670d9ac8f4cdcd3e5a85abccd3edc093b9eafd7b6b5f9d not found: ID does not exist"
Dec 02 11:01:50 crc kubenswrapper[4813]: I1202 11:01:50.079585 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10488f1-d254-4b27-a6b6-91a662954601" path="/var/lib/kubelet/pods/a10488f1-d254-4b27-a6b6-91a662954601/volumes"
Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.051747 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-722fn"]
Dec 02 11:03:00 crc kubenswrapper[4813]: E1202 11:03:00.053308 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10488f1-d254-4b27-a6b6-91a662954601" containerName="extract-utilities"
Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.053342 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10488f1-d254-4b27-a6b6-91a662954601" containerName="extract-utilities"
Dec 02 11:03:00 crc kubenswrapper[4813]: E1202 11:03:00.053375 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10488f1-d254-4b27-a6b6-91a662954601" containerName="extract-content"
Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.053397 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10488f1-d254-4b27-a6b6-91a662954601" containerName="extract-content"
Dec 02 11:03:00 crc kubenswrapper[4813]: E1202 11:03:00.053437 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10488f1-d254-4b27-a6b6-91a662954601" containerName="registry-server"
Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.053455 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10488f1-d254-4b27-a6b6-91a662954601" containerName="registry-server"
Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.053931 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10488f1-d254-4b27-a6b6-91a662954601"
containerName="registry-server" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.061222 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.082644 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-722fn"] Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.215192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-catalog-content\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.215472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8lf\" (UniqueName: \"kubernetes.io/projected/926391e2-61e2-44e0-aad9-7ba2c4933da5-kube-api-access-zp8lf\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.215556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-utilities\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.316860 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-catalog-content\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.316989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp8lf\" (UniqueName: \"kubernetes.io/projected/926391e2-61e2-44e0-aad9-7ba2c4933da5-kube-api-access-zp8lf\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.317024 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-utilities\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.317330 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-catalog-content\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.317375 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-utilities\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " 
pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.339904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp8lf\" (UniqueName: \"kubernetes.io/projected/926391e2-61e2-44e0-aad9-7ba2c4933da5-kube-api-access-zp8lf\") pod \"redhat-marketplace-722fn\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.388230 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:00 crc kubenswrapper[4813]: I1202 11:03:00.883660 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-722fn"] Dec 02 11:03:00 crc kubenswrapper[4813]: W1202 11:03:00.888889 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod926391e2_61e2_44e0_aad9_7ba2c4933da5.slice/crio-b2a7228ab5bb6fe441fe80c1800457f57493f76f167311e9f7ed7f5cacc21dda WatchSource:0}: Error finding container b2a7228ab5bb6fe441fe80c1800457f57493f76f167311e9f7ed7f5cacc21dda: Status 404 returned error can't find the container with id b2a7228ab5bb6fe441fe80c1800457f57493f76f167311e9f7ed7f5cacc21dda Dec 02 11:03:01 crc kubenswrapper[4813]: I1202 11:03:01.508089 4813 generic.go:334] "Generic (PLEG): container finished" podID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerID="aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52" exitCode=0 Dec 02 11:03:01 crc kubenswrapper[4813]: I1202 11:03:01.508147 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-722fn" event={"ID":"926391e2-61e2-44e0-aad9-7ba2c4933da5","Type":"ContainerDied","Data":"aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52"} Dec 02 11:03:01 crc kubenswrapper[4813]: I1202 11:03:01.508181 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-722fn" event={"ID":"926391e2-61e2-44e0-aad9-7ba2c4933da5","Type":"ContainerStarted","Data":"b2a7228ab5bb6fe441fe80c1800457f57493f76f167311e9f7ed7f5cacc21dda"} Dec 02 11:03:03 crc kubenswrapper[4813]: I1202 11:03:03.524656 4813 generic.go:334] "Generic (PLEG): container finished" podID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerID="7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc" exitCode=0 Dec 02 11:03:03 crc kubenswrapper[4813]: I1202 11:03:03.524941 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-722fn" event={"ID":"926391e2-61e2-44e0-aad9-7ba2c4933da5","Type":"ContainerDied","Data":"7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc"} Dec 02 11:03:04 crc kubenswrapper[4813]: I1202 11:03:04.536230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-722fn" event={"ID":"926391e2-61e2-44e0-aad9-7ba2c4933da5","Type":"ContainerStarted","Data":"a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b"} Dec 02 11:03:04 crc kubenswrapper[4813]: I1202 11:03:04.560176 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-722fn" podStartSLOduration=2.088508506 podStartE2EDuration="4.560156513s" podCreationTimestamp="2025-12-02 11:03:00 +0000 UTC" firstStartedPulling="2025-12-02 11:03:01.515391719 +0000 UTC m=+3305.710566031" 
lastFinishedPulling="2025-12-02 11:03:03.987039736 +0000 UTC m=+3308.182214038" observedRunningTime="2025-12-02 11:03:04.5547642 +0000 UTC m=+3308.749938512" watchObservedRunningTime="2025-12-02 11:03:04.560156513 +0000 UTC m=+3308.755330815" Dec 02 11:03:10 crc kubenswrapper[4813]: I1202 11:03:10.388331 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:10 crc kubenswrapper[4813]: I1202 11:03:10.389431 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:10 crc kubenswrapper[4813]: I1202 11:03:10.437276 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:10 crc kubenswrapper[4813]: I1202 11:03:10.632482 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:10 crc kubenswrapper[4813]: I1202 11:03:10.680185 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-722fn"] Dec 02 11:03:12 crc kubenswrapper[4813]: I1202 11:03:12.602509 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-722fn" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerName="registry-server" containerID="cri-o://a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b" gracePeriod=2 Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.090982 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.254054 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-catalog-content\") pod \"926391e2-61e2-44e0-aad9-7ba2c4933da5\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.254224 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp8lf\" (UniqueName: \"kubernetes.io/projected/926391e2-61e2-44e0-aad9-7ba2c4933da5-kube-api-access-zp8lf\") pod \"926391e2-61e2-44e0-aad9-7ba2c4933da5\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.254266 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-utilities\") pod \"926391e2-61e2-44e0-aad9-7ba2c4933da5\" (UID: \"926391e2-61e2-44e0-aad9-7ba2c4933da5\") " Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.255064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-utilities" (OuterVolumeSpecName: "utilities") pod "926391e2-61e2-44e0-aad9-7ba2c4933da5" (UID: "926391e2-61e2-44e0-aad9-7ba2c4933da5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.260593 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926391e2-61e2-44e0-aad9-7ba2c4933da5-kube-api-access-zp8lf" (OuterVolumeSpecName: "kube-api-access-zp8lf") pod "926391e2-61e2-44e0-aad9-7ba2c4933da5" (UID: "926391e2-61e2-44e0-aad9-7ba2c4933da5"). InnerVolumeSpecName "kube-api-access-zp8lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.274015 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "926391e2-61e2-44e0-aad9-7ba2c4933da5" (UID: "926391e2-61e2-44e0-aad9-7ba2c4933da5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.356729 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp8lf\" (UniqueName: \"kubernetes.io/projected/926391e2-61e2-44e0-aad9-7ba2c4933da5-kube-api-access-zp8lf\") on node \"crc\" DevicePath \"\"" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.356772 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.356785 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926391e2-61e2-44e0-aad9-7ba2c4933da5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.614605 4813 generic.go:334] "Generic (PLEG): container finished" podID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerID="a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b" exitCode=0 Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.614664 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-722fn" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.614676 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-722fn" event={"ID":"926391e2-61e2-44e0-aad9-7ba2c4933da5","Type":"ContainerDied","Data":"a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b"} Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.614733 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-722fn" event={"ID":"926391e2-61e2-44e0-aad9-7ba2c4933da5","Type":"ContainerDied","Data":"b2a7228ab5bb6fe441fe80c1800457f57493f76f167311e9f7ed7f5cacc21dda"} Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.614766 4813 scope.go:117] "RemoveContainer" containerID="a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.643108 4813 scope.go:117] "RemoveContainer" containerID="7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.655170 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-722fn"] Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.665828 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-722fn"] Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.683271 4813 scope.go:117] "RemoveContainer" containerID="aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.724633 4813 scope.go:117] "RemoveContainer" containerID="a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b" Dec 02 11:03:13 crc kubenswrapper[4813]: E1202 11:03:13.725299 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b\": container with ID starting with a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b not found: ID does not exist" containerID="a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.725349 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b"} err="failed to get container status \"a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b\": rpc error: code = NotFound desc = could not find container \"a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b\": container with ID starting with a466f70bbf47a95fa95cb94dd2d1a11cb0745784310b716023858d56085d786b not found: ID does not exist" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.725386 4813 scope.go:117] "RemoveContainer" containerID="7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc" Dec 02 11:03:13 crc kubenswrapper[4813]: E1202 11:03:13.725699 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc\": container with ID starting with 7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc not found: ID does not exist" containerID="7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.725723 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc"} err="failed to get container status \"7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc\": rpc error: code = NotFound desc = could not find container \"7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc\": container with ID starting with 7eca1fa5bec291357e0978f324a15f6b4aa8191163e0e2efd49e0c2489a7d8dc not found: ID does not exist" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.725745 4813 scope.go:117] "RemoveContainer" containerID="aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52" Dec 02 11:03:13 crc kubenswrapper[4813]: E1202 11:03:13.725972 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52\": container with ID starting with aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52 not found: ID does not exist" containerID="aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52" Dec 02 11:03:13 crc kubenswrapper[4813]: I1202 11:03:13.726005 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52"} err="failed to get container status \"aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52\": rpc error: code = NotFound desc = could not find container \"aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52\": container with ID starting with aa5683824095bd5fb8c9542c6b70a84d119d88e5488d74790662a35f66b22e52 not found: ID does not exist" Dec 02 11:03:14 crc kubenswrapper[4813]: I1202 11:03:14.084064 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" path="/var/lib/kubelet/pods/926391e2-61e2-44e0-aad9-7ba2c4933da5/volumes" Dec 02 11:03:34 crc kubenswrapper[4813]: I1202 11:03:34.273909 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:03:34 crc kubenswrapper[4813]: I1202 11:03:34.274924 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:04:04 crc kubenswrapper[4813]: I1202 11:04:04.274013 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:04:04 crc kubenswrapper[4813]: I1202 11:04:04.274781 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:04:34 crc kubenswrapper[4813]: I1202 
11:04:34.274391 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:04:34 crc kubenswrapper[4813]: I1202 11:04:34.274795 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:04:34 crc kubenswrapper[4813]: I1202 11:04:34.274837 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 11:04:34 crc kubenswrapper[4813]: I1202 11:04:34.275507 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c3e81ef0544582e3698354dd375931f076f345ee7fd7985e93a5f7436257ada"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:04:34 crc kubenswrapper[4813]: I1202 11:04:34.275569 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://4c3e81ef0544582e3698354dd375931f076f345ee7fd7985e93a5f7436257ada" gracePeriod=600 Dec 02 11:04:35 crc kubenswrapper[4813]: I1202 11:04:35.373677 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="4c3e81ef0544582e3698354dd375931f076f345ee7fd7985e93a5f7436257ada" exitCode=0 Dec 02 11:04:35 crc kubenswrapper[4813]: I1202 11:04:35.373745 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"4c3e81ef0544582e3698354dd375931f076f345ee7fd7985e93a5f7436257ada"} Dec 02 11:04:35 crc kubenswrapper[4813]: I1202 11:04:35.374277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46"} Dec 02 11:04:35 crc kubenswrapper[4813]: I1202 11:04:35.374297 4813 scope.go:117] "RemoveContainer" containerID="219ce5fe1ccbb2e646bf574e68a3d74bcb7e7108b08a339dd7574d97341bce6a" Dec 02 11:06:05 crc kubenswrapper[4813]: I1202 11:06:05.193421 4813 generic.go:334] "Generic (PLEG): container finished" podID="d12b539d-a4ef-4c0d-9770-af7b7543d284" containerID="edbe08dae2ea72d9a204cca65ee2703b9acc9ae97d24c2e1903accb13ef12b06" exitCode=0 Dec 02 11:06:05 crc kubenswrapper[4813]: I1202 11:06:05.193557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7" event={"ID":"d12b539d-a4ef-4c0d-9770-af7b7543d284","Type":"ContainerDied","Data":"edbe08dae2ea72d9a204cca65ee2703b9acc9ae97d24c2e1903accb13ef12b06"} Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.628907 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.740326 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-secret-0\") pod \"d12b539d-a4ef-4c0d-9770-af7b7543d284\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.740590 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ssh-key\") pod \"d12b539d-a4ef-4c0d-9770-af7b7543d284\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.740651 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ceph\") pod \"d12b539d-a4ef-4c0d-9770-af7b7543d284\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.740667 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-inventory\") pod \"d12b539d-a4ef-4c0d-9770-af7b7543d284\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.740726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-combined-ca-bundle\") pod \"d12b539d-a4ef-4c0d-9770-af7b7543d284\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.740746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tjsh\" (UniqueName: \"kubernetes.io/projected/d12b539d-a4ef-4c0d-9770-af7b7543d284-kube-api-access-9tjsh\") pod \"d12b539d-a4ef-4c0d-9770-af7b7543d284\" (UID: \"d12b539d-a4ef-4c0d-9770-af7b7543d284\") " Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.746738 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d12b539d-a4ef-4c0d-9770-af7b7543d284" (UID: "d12b539d-a4ef-4c0d-9770-af7b7543d284"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.746851 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12b539d-a4ef-4c0d-9770-af7b7543d284-kube-api-access-9tjsh" (OuterVolumeSpecName: "kube-api-access-9tjsh") pod "d12b539d-a4ef-4c0d-9770-af7b7543d284" (UID: "d12b539d-a4ef-4c0d-9770-af7b7543d284"). InnerVolumeSpecName "kube-api-access-9tjsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.747725 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ceph" (OuterVolumeSpecName: "ceph") pod "d12b539d-a4ef-4c0d-9770-af7b7543d284" (UID: "d12b539d-a4ef-4c0d-9770-af7b7543d284"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.767297 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d12b539d-a4ef-4c0d-9770-af7b7543d284" (UID: "d12b539d-a4ef-4c0d-9770-af7b7543d284"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.771492 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d12b539d-a4ef-4c0d-9770-af7b7543d284" (UID: "d12b539d-a4ef-4c0d-9770-af7b7543d284"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.784742 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-inventory" (OuterVolumeSpecName: "inventory") pod "d12b539d-a4ef-4c0d-9770-af7b7543d284" (UID: "d12b539d-a4ef-4c0d-9770-af7b7543d284"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.843328 4813 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.843368 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.843382 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.843393 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.843405 4813 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12b539d-a4ef-4c0d-9770-af7b7543d284-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:06:06 crc kubenswrapper[4813]: I1202 11:06:06.843420 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tjsh\" (UniqueName: \"kubernetes.io/projected/d12b539d-a4ef-4c0d-9770-af7b7543d284-kube-api-access-9tjsh\") on node \"crc\" DevicePath \"\"" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.214746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7" event={"ID":"d12b539d-a4ef-4c0d-9770-af7b7543d284","Type":"ContainerDied","Data":"f9e82732149422b285531502f122faeeef86a15eb5d770a511c74d76c95d8f08"} Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.214786 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e82732149422b285531502f122faeeef86a15eb5d770a511c74d76c95d8f08" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.214834 4813 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.332721 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2"] Dec 02 11:06:07 crc kubenswrapper[4813]: E1202 11:06:07.333158 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerName="registry-server" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.333182 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerName="registry-server" Dec 02 11:06:07 crc kubenswrapper[4813]: E1202 11:06:07.333211 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerName="extract-content" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.333220 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerName="extract-content" Dec 02 11:06:07 crc kubenswrapper[4813]: E1202 11:06:07.333245 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12b539d-a4ef-4c0d-9770-af7b7543d284" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.333255 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12b539d-a4ef-4c0d-9770-af7b7543d284" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 11:06:07 crc kubenswrapper[4813]: E1202 11:06:07.333272 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerName="extract-utilities" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.333280 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerName="extract-utilities" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.333491 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12b539d-a4ef-4c0d-9770-af7b7543d284" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.333506 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="926391e2-61e2-44e0-aad9-7ba2c4933da5" containerName="registry-server" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.334924 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.337641 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.337975 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.338742 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s6rdk" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.339203 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.339483 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.339490 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.339498 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.339696 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.342016 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.343439 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2"] Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.455601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.455918 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456198 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456296 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456491 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456563 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456669 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456739 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456856 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.456985 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnsv\" (UniqueName: \"kubernetes.io/projected/09685150-e1df-4f9e-9780-b44084b88a32-kube-api-access-2bnsv\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558425 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558553 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558593 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558642 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558683 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bnsv\" (UniqueName: \"kubernetes.io/projected/09685150-e1df-4f9e-9780-b44084b88a32-kube-api-access-2bnsv\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558721 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.558739 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.559580 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.560262 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.562545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.563167 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.563230 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.563385 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.564431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.564481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.564549 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.566372 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.587178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bnsv\" (UniqueName: \"kubernetes.io/projected/09685150-e1df-4f9e-9780-b44084b88a32-kube-api-access-2bnsv\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:07 crc kubenswrapper[4813]: I1202 11:06:07.652306 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:06:08 crc kubenswrapper[4813]: I1202 11:06:08.160106 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2"] Dec 02 11:06:08 crc kubenswrapper[4813]: I1202 11:06:08.223554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" event={"ID":"09685150-e1df-4f9e-9780-b44084b88a32","Type":"ContainerStarted","Data":"be55450aeedfc40eba75e75f6028570864e6dbb12080a0e9de842e9c704e7872"} Dec 02 11:06:09 crc kubenswrapper[4813]: I1202 11:06:09.235491 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" event={"ID":"09685150-e1df-4f9e-9780-b44084b88a32","Type":"ContainerStarted","Data":"160498fd403d9a36f7883a8779f60842fb0a1b89d9f165b56d589bcbddfc50ca"} Dec 02 11:06:09 crc kubenswrapper[4813]: I1202 11:06:09.258238 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" podStartSLOduration=1.7959363430000002 podStartE2EDuration="2.258220452s" podCreationTimestamp="2025-12-02 11:06:07 +0000 UTC" firstStartedPulling="2025-12-02 11:06:08.16319219 +0000 UTC m=+3492.358366492" lastFinishedPulling="2025-12-02 11:06:08.625476279 +0000 UTC m=+3492.820650601" observedRunningTime="2025-12-02 11:06:09.251359267 +0000 UTC m=+3493.446533579" watchObservedRunningTime="2025-12-02 11:06:09.258220452 +0000 UTC m=+3493.453394754" Dec 02 11:06:34 crc kubenswrapper[4813]: I1202 11:06:34.273401 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:06:34 crc kubenswrapper[4813]: I1202 11:06:34.273864 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:07:04 crc kubenswrapper[4813]: I1202 11:07:04.273520 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:07:04 crc kubenswrapper[4813]: I1202 11:07:04.274133 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:07:34 crc kubenswrapper[4813]: I1202 11:07:34.273852 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:07:34 crc kubenswrapper[4813]: I1202 
Dec 02 11:07:34 crc kubenswrapper[4813]: I1202 11:07:34.274453 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:07:34 crc kubenswrapper[4813]: I1202 11:07:34.274502 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 11:07:34 crc kubenswrapper[4813]: I1202 11:07:34.275246 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:07:34 crc kubenswrapper[4813]: I1202 11:07:34.275305 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" gracePeriod=600 Dec 02 11:07:34 crc kubenswrapper[4813]: E1202 11:07:34.400984 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:07:35 crc kubenswrapper[4813]: I1202 11:07:35.101413 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" exitCode=0 Dec 02 11:07:35 crc kubenswrapper[4813]: I1202 11:07:35.101461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46"} Dec 02 11:07:35 crc kubenswrapper[4813]: I1202 11:07:35.101492 4813 scope.go:117] "RemoveContainer" containerID="4c3e81ef0544582e3698354dd375931f076f345ee7fd7985e93a5f7436257ada" Dec 02 11:07:35 crc kubenswrapper[4813]: I1202 11:07:35.102469 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:07:35 crc kubenswrapper[4813]: E1202 11:07:35.102715 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:07:50 crc kubenswrapper[4813]: I1202 11:07:50.068479 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46"
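
[annotation] The "back-off 5m0s" above is kubelet's crash-loop restart delay at its cap: per upstream kubelet defaults the delay starts at 10 s and doubles after each failed restart up to a 5 m maximum (treat the exact constants as an assumption for any particular build). The recurring "Error syncing pod, skipping" lines that follow in this log are sync attempts bouncing off that timer, not new crashes. A sketch of the schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second        // assumed initial crash-loop delay
        const maxDelay = 5 * time.Minute // the "back-off 5m0s" seen in the log
        for i := 1; delay < maxDelay; i++ {
            fmt.Printf("restart attempt %d: wait %v\n", i, delay)
            delay *= 2 // kubelet doubles the delay on each failed restart
        }
        fmt.Println("all later attempts: wait", maxDelay)
    }
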
Dec 02 11:07:50 crc kubenswrapper[4813]: E1202 11:07:50.069473 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:07:59 crc kubenswrapper[4813]: I1202 11:07:59.916552 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6t5sj"] Dec 02 11:07:59 crc kubenswrapper[4813]: I1202 11:07:59.919119 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:07:59 crc kubenswrapper[4813]: I1202 11:07:59.925895 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t5sj"] Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.049573 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-catalog-content\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.049620 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-utilities\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.049638 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdm4\" (UniqueName: \"kubernetes.io/projected/74174e97-1e7b-48a4-8c35-21055ebdd6e1-kube-api-access-htdm4\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.151022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-catalog-content\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.151087 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-utilities\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.151103 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdm4\" (UniqueName: \"kubernetes.io/projected/74174e97-1e7b-48a4-8c35-21055ebdd6e1-kube-api-access-htdm4\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.151672 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-catalog-content\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.151720 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-utilities\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.171327 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdm4\" (UniqueName: \"kubernetes.io/projected/74174e97-1e7b-48a4-8c35-21055ebdd6e1-kube-api-access-htdm4\") pod \"redhat-operators-6t5sj\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.247854 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:00 crc kubenswrapper[4813]: I1202 11:08:00.724791 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t5sj"] Dec 02 11:08:01 crc kubenswrapper[4813]: I1202 11:08:01.068336 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:08:01 crc kubenswrapper[4813]: E1202 11:08:01.068583 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:08:01 crc kubenswrapper[4813]: I1202 11:08:01.354343 4813 generic.go:334] "Generic (PLEG): container finished" podID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerID="3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab" exitCode=0 Dec 02 11:08:01 crc kubenswrapper[4813]: I1202 11:08:01.354452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t5sj" event={"ID":"74174e97-1e7b-48a4-8c35-21055ebdd6e1","Type":"ContainerDied","Data":"3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab"} Dec 02 11:08:01 crc kubenswrapper[4813]: I1202 11:08:01.354665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t5sj" event={"ID":"74174e97-1e7b-48a4-8c35-21055ebdd6e1","Type":"ContainerStarted","Data":"435d99b18a2352ddcd166553afc27014d7677a1afd9453edeee0f429a41f62e0"} Dec 02 11:08:01 crc kubenswrapper[4813]: I1202 11:08:01.356685 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:08:03 crc kubenswrapper[4813]: I1202 11:08:03.378796 4813 generic.go:334] "Generic (PLEG): container finished" podID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerID="a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795" exitCode=0 Dec 02 11:08:03 crc kubenswrapper[4813]: I1202 11:08:03.378877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t5sj" 
event={"ID":"74174e97-1e7b-48a4-8c35-21055ebdd6e1","Type":"ContainerDied","Data":"a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795"} Dec 02 11:08:04 crc kubenswrapper[4813]: I1202 11:08:04.392374 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t5sj" event={"ID":"74174e97-1e7b-48a4-8c35-21055ebdd6e1","Type":"ContainerStarted","Data":"126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a"} Dec 02 11:08:04 crc kubenswrapper[4813]: I1202 11:08:04.423866 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6t5sj" podStartSLOduration=2.919172149 podStartE2EDuration="5.423839122s" podCreationTimestamp="2025-12-02 11:07:59 +0000 UTC" firstStartedPulling="2025-12-02 11:08:01.356388417 +0000 UTC m=+3605.551562719" lastFinishedPulling="2025-12-02 11:08:03.86105539 +0000 UTC m=+3608.056229692" observedRunningTime="2025-12-02 11:08:04.415186157 +0000 UTC m=+3608.610360459" watchObservedRunningTime="2025-12-02 11:08:04.423839122 +0000 UTC m=+3608.619013424" Dec 02 11:08:10 crc kubenswrapper[4813]: I1202 11:08:10.248475 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:10 crc kubenswrapper[4813]: I1202 11:08:10.250490 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:10 crc kubenswrapper[4813]: I1202 11:08:10.301939 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:10 crc kubenswrapper[4813]: I1202 11:08:10.490051 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:10 crc kubenswrapper[4813]: I1202 11:08:10.555751 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t5sj"] Dec 02 11:08:12 crc kubenswrapper[4813]: I1202 11:08:12.068303 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:08:12 crc kubenswrapper[4813]: E1202 11:08:12.068857 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:08:12 crc kubenswrapper[4813]: I1202 11:08:12.463837 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6t5sj" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerName="registry-server" containerID="cri-o://126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a" gracePeriod=2 Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.398138 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.478303 4813 generic.go:334] "Generic (PLEG): container finished" podID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerID="126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a" exitCode=0 Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.478355 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t5sj" event={"ID":"74174e97-1e7b-48a4-8c35-21055ebdd6e1","Type":"ContainerDied","Data":"126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a"} Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.478382 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t5sj" event={"ID":"74174e97-1e7b-48a4-8c35-21055ebdd6e1","Type":"ContainerDied","Data":"435d99b18a2352ddcd166553afc27014d7677a1afd9453edeee0f429a41f62e0"} Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.478406 4813 scope.go:117] "RemoveContainer" containerID="126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.478440 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t5sj" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.498821 4813 scope.go:117] "RemoveContainer" containerID="a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.515252 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-catalog-content\") pod \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.515426 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-utilities\") pod \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.515504 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdm4\" (UniqueName: \"kubernetes.io/projected/74174e97-1e7b-48a4-8c35-21055ebdd6e1-kube-api-access-htdm4\") pod \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\" (UID: \"74174e97-1e7b-48a4-8c35-21055ebdd6e1\") " Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.517219 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-utilities" (OuterVolumeSpecName: "utilities") pod "74174e97-1e7b-48a4-8c35-21055ebdd6e1" (UID: "74174e97-1e7b-48a4-8c35-21055ebdd6e1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.524143 4813 scope.go:117] "RemoveContainer" containerID="3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.531296 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74174e97-1e7b-48a4-8c35-21055ebdd6e1-kube-api-access-htdm4" (OuterVolumeSpecName: "kube-api-access-htdm4") pod "74174e97-1e7b-48a4-8c35-21055ebdd6e1" (UID: "74174e97-1e7b-48a4-8c35-21055ebdd6e1"). InnerVolumeSpecName "kube-api-access-htdm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.598241 4813 scope.go:117] "RemoveContainer" containerID="126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a" Dec 02 11:08:13 crc kubenswrapper[4813]: E1202 11:08:13.598615 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a\": container with ID starting with 126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a not found: ID does not exist" containerID="126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.598654 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a"} err="failed to get container status \"126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a\": rpc error: code = NotFound desc = could not find container \"126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a\": container with ID starting with 126f64a902ba834534a1674836ded31ca030b2dea10b035b9080afc491d5ed7a not found: ID does not exist" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.598678 4813 scope.go:117] "RemoveContainer" containerID="a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795" Dec 02 11:08:13 crc kubenswrapper[4813]: E1202 11:08:13.598917 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795\": container with ID starting with a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795 not found: ID does not exist" containerID="a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.598946 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795"} err="failed to get container status \"a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795\": rpc error: code = NotFound desc = could not find container \"a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795\": container with ID starting with a7144105cdd8682b0f8ef129c2703b38b51d0451c056d5ad0024d0999c1e4795 not found: ID does not exist" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.598963 4813 scope.go:117] "RemoveContainer" containerID="3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab" Dec 02 11:08:13 crc kubenswrapper[4813]: E1202 11:08:13.599206 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab\": container with ID starting with 3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab not found: ID does not exist" containerID="3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.599230 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab"} err="failed to get container status \"3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab\": rpc error: code = NotFound desc = could not find container \"3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab\": container with ID starting with 3a9121c4c258f2523431c1828cf78ef593a26e8572c62cf1ff7484d16d3aa8ab not found: ID does not exist" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.618272 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:08:13 crc kubenswrapper[4813]: I1202 11:08:13.618331 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdm4\" (UniqueName: \"kubernetes.io/projected/74174e97-1e7b-48a4-8c35-21055ebdd6e1-kube-api-access-htdm4\") on node \"crc\" DevicePath \"\"" Dec 02 11:08:15 crc kubenswrapper[4813]: I1202 11:08:15.399869 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74174e97-1e7b-48a4-8c35-21055ebdd6e1" (UID: "74174e97-1e7b-48a4-8c35-21055ebdd6e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:08:15 crc kubenswrapper[4813]: I1202 11:08:15.452773 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74174e97-1e7b-48a4-8c35-21055ebdd6e1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:08:15 crc kubenswrapper[4813]: I1202 11:08:15.614502 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t5sj"] Dec 02 11:08:15 crc kubenswrapper[4813]: I1202 11:08:15.622385 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6t5sj"] Dec 02 11:08:16 crc kubenswrapper[4813]: I1202 11:08:16.077515 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" path="/var/lib/kubelet/pods/74174e97-1e7b-48a4-8c35-21055ebdd6e1/volumes" Dec 02 11:08:26 crc kubenswrapper[4813]: I1202 11:08:26.075888 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:08:26 crc kubenswrapper[4813]: E1202 11:08:26.077283 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.397993 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tx66w"] Dec 02 11:08:27 crc kubenswrapper[4813]: E1202 11:08:27.398696 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerName="extract-content" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.398708 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerName="extract-content" Dec 02 11:08:27 crc kubenswrapper[4813]: E1202 11:08:27.398723 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerName="extract-utilities" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.398729 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerName="extract-utilities" Dec 02 11:08:27 crc kubenswrapper[4813]: E1202 11:08:27.398743 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerName="registry-server" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.398749 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerName="registry-server" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.398907 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="74174e97-1e7b-48a4-8c35-21055ebdd6e1" containerName="registry-server" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.402450 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.423572 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tx66w"] Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.484006 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkhms\" (UniqueName: \"kubernetes.io/projected/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-kube-api-access-qkhms\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.484276 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-utilities\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.484453 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-catalog-content\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.586768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkhms\" (UniqueName: \"kubernetes.io/projected/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-kube-api-access-qkhms\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.586862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-utilities\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.586975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-catalog-content\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.587544 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-catalog-content\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.587660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-utilities\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.606538 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qkhms\" (UniqueName: \"kubernetes.io/projected/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-kube-api-access-qkhms\") pod \"certified-operators-tx66w\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:27 crc kubenswrapper[4813]: I1202 11:08:27.724622 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:28 crc kubenswrapper[4813]: I1202 11:08:28.196546 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tx66w"] Dec 02 11:08:28 crc kubenswrapper[4813]: I1202 11:08:28.613910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx66w" event={"ID":"5e3874b8-fdd0-4bc4-8261-91164db0a0c4","Type":"ContainerStarted","Data":"b5954e2e6d4055d1c9eb77d9891f796dfe068ac2e443e51d22f5c89defbfe088"} Dec 02 11:08:29 crc kubenswrapper[4813]: I1202 11:08:29.629478 4813 generic.go:334] "Generic (PLEG): container finished" podID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerID="9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7" exitCode=0 Dec 02 11:08:29 crc kubenswrapper[4813]: I1202 11:08:29.629920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx66w" event={"ID":"5e3874b8-fdd0-4bc4-8261-91164db0a0c4","Type":"ContainerDied","Data":"9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7"} Dec 02 11:08:31 crc kubenswrapper[4813]: I1202 11:08:31.655054 4813 generic.go:334] "Generic (PLEG): container finished" podID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerID="d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546" exitCode=0 Dec 02 11:08:31 crc kubenswrapper[4813]: I1202 11:08:31.655188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx66w" event={"ID":"5e3874b8-fdd0-4bc4-8261-91164db0a0c4","Type":"ContainerDied","Data":"d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546"} Dec 02 11:08:32 crc kubenswrapper[4813]: I1202 11:08:32.667156 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx66w" event={"ID":"5e3874b8-fdd0-4bc4-8261-91164db0a0c4","Type":"ContainerStarted","Data":"08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a"} Dec 02 11:08:32 crc kubenswrapper[4813]: I1202 11:08:32.690908 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tx66w" podStartSLOduration=3.126748478 podStartE2EDuration="5.690888207s" podCreationTimestamp="2025-12-02 11:08:27 +0000 UTC" firstStartedPulling="2025-12-02 11:08:29.635431864 +0000 UTC m=+3633.830606206" lastFinishedPulling="2025-12-02 11:08:32.199571633 +0000 UTC m=+3636.394745935" observedRunningTime="2025-12-02 11:08:32.686570804 +0000 UTC m=+3636.881745116" watchObservedRunningTime="2025-12-02 11:08:32.690888207 +0000 UTC m=+3636.886062509" Dec 02 11:08:37 crc kubenswrapper[4813]: I1202 11:08:37.725794 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:37 crc kubenswrapper[4813]: I1202 11:08:37.726360 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:37 crc kubenswrapper[4813]: I1202 11:08:37.797960 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:38 crc kubenswrapper[4813]: I1202 11:08:38.775116 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:38 crc kubenswrapper[4813]: I1202 11:08:38.831054 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tx66w"] Dec 02 11:08:40 crc kubenswrapper[4813]: I1202 11:08:40.068659 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:08:40 crc kubenswrapper[4813]: E1202 11:08:40.068904 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:08:40 crc kubenswrapper[4813]: I1202 11:08:40.752318 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tx66w" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerName="registry-server" containerID="cri-o://08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a" gracePeriod=2 Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.193113 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.328276 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkhms\" (UniqueName: \"kubernetes.io/projected/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-kube-api-access-qkhms\") pod \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.328375 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-catalog-content\") pod \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.328401 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-utilities\") pod \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\" (UID: \"5e3874b8-fdd0-4bc4-8261-91164db0a0c4\") " Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.329486 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-utilities" (OuterVolumeSpecName: "utilities") pod "5e3874b8-fdd0-4bc4-8261-91164db0a0c4" (UID: "5e3874b8-fdd0-4bc4-8261-91164db0a0c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.334010 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-kube-api-access-qkhms" (OuterVolumeSpecName: "kube-api-access-qkhms") pod "5e3874b8-fdd0-4bc4-8261-91164db0a0c4" (UID: "5e3874b8-fdd0-4bc4-8261-91164db0a0c4"). InnerVolumeSpecName "kube-api-access-qkhms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.431600 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkhms\" (UniqueName: \"kubernetes.io/projected/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-kube-api-access-qkhms\") on node \"crc\" DevicePath \"\"" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.431662 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.644635 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e3874b8-fdd0-4bc4-8261-91164db0a0c4" (UID: "5e3874b8-fdd0-4bc4-8261-91164db0a0c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.737387 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e3874b8-fdd0-4bc4-8261-91164db0a0c4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.762651 4813 generic.go:334] "Generic (PLEG): container finished" podID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerID="08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a" exitCode=0 Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.762713 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tx66w" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.762742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx66w" event={"ID":"5e3874b8-fdd0-4bc4-8261-91164db0a0c4","Type":"ContainerDied","Data":"08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a"} Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.762800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx66w" event={"ID":"5e3874b8-fdd0-4bc4-8261-91164db0a0c4","Type":"ContainerDied","Data":"b5954e2e6d4055d1c9eb77d9891f796dfe068ac2e443e51d22f5c89defbfe088"} Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.762823 4813 scope.go:117] "RemoveContainer" containerID="08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.797464 4813 scope.go:117] "RemoveContainer" containerID="d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.801050 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tx66w"] Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.818751 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tx66w"] Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.827530 4813 scope.go:117] "RemoveContainer" containerID="9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.864743 4813 scope.go:117] "RemoveContainer" containerID="08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a" Dec 02 11:08:41 crc kubenswrapper[4813]: E1202 11:08:41.865176 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a\": container with ID starting with 08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a not found: ID does not exist" containerID="08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.865214 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a"} err="failed to get container status \"08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a\": rpc error: code = NotFound desc = could not find container \"08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a\": container with ID starting with 08bec1945a5baee5d3c9521e1cd0fb2447870a857cb7cc48bee1bb9439aed11a not found: ID does not exist" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.865239 4813 scope.go:117] "RemoveContainer" containerID="d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546" Dec 02 11:08:41 crc kubenswrapper[4813]: E1202 11:08:41.865579 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546\": container with ID starting with d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546 not found: ID does not exist" containerID="d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.865628 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546"} err="failed to get container status \"d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546\": rpc error: code = NotFound desc = could not find container \"d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546\": container with ID starting with d0d1a352944aff0af9fb78809274753ccdfdec2558189a33030732ed299c9546 not found: ID does not exist" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.865656 4813 scope.go:117] "RemoveContainer" containerID="9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7" Dec 02 11:08:41 crc kubenswrapper[4813]: E1202 11:08:41.866015 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7\": container with ID starting with 9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7 not found: ID does not exist" containerID="9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7" Dec 02 11:08:41 crc kubenswrapper[4813]: I1202 11:08:41.866051 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7"} err="failed to get container status \"9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7\": rpc error: code = NotFound desc = could not find container \"9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7\": container with ID starting with 9f8c3967a7e342e4d1e4342963053214b649b4585816e072e8ec98f7bc74acb7 not found: ID does not exist" Dec 02 11:08:42 crc kubenswrapper[4813]: I1202 11:08:42.087333 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" path="/var/lib/kubelet/pods/5e3874b8-fdd0-4bc4-8261-91164db0a0c4/volumes" Dec 02 11:08:55 crc kubenswrapper[4813]: I1202 11:08:55.068758 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:08:55 crc kubenswrapper[4813]: E1202 11:08:55.071370 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:09:07 crc kubenswrapper[4813]: I1202 11:09:07.068449 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:09:07 crc kubenswrapper[4813]: E1202 11:09:07.069626 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:09:22 crc kubenswrapper[4813]: I1202 11:09:22.067395 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46"
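
[annotation] Every record in this dump is a journald prefix ("Dec 02 11:09:22 crc kubenswrapper[4813]:") followed by a klog header: severity letter plus date (E1202), time, PID, and file:line. When triaging a dump like this it helps to split the records mechanically; a small Go parser written against the lines above (other klog configurations may format the header differently):

    package main

    import (
        "fmt"
        "regexp"
    )

    var record = regexp.MustCompile(
        `kubenswrapper\[\d+\]: ([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := `Dec 02 11:09:22 crc kubenswrapper[4813]: E1202 11:09:22.068104 4813 pod_workers.go:1301] "Error syncing pod, skipping"`
        m := record.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog record")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
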
Dec 02 11:09:22 crc kubenswrapper[4813]: E1202 11:09:22.068104 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:09:28 crc kubenswrapper[4813]: I1202 11:09:28.185512 4813 generic.go:334] "Generic (PLEG): container finished" podID="09685150-e1df-4f9e-9780-b44084b88a32" containerID="160498fd403d9a36f7883a8779f60842fb0a1b89d9f165b56d589bcbddfc50ca" exitCode=0 Dec 02 11:09:28 crc kubenswrapper[4813]: I1202 11:09:28.185597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" event={"ID":"09685150-e1df-4f9e-9780-b44084b88a32","Type":"ContainerDied","Data":"160498fd403d9a36f7883a8779f60842fb0a1b89d9f165b56d589bcbddfc50ca"} Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.571858 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.669455 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ssh-key\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.669864 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ceph\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.669938 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-1\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.669980 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-0\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.670012 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-nova-extra-config-0\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.670059 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-1\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.670141 4813 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-0\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.670239 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-inventory\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.670274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-custom-ceph-combined-ca-bundle\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.670316 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-ceph-nova-0\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.670371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bnsv\" (UniqueName: \"kubernetes.io/projected/09685150-e1df-4f9e-9780-b44084b88a32-kube-api-access-2bnsv\") pod \"09685150-e1df-4f9e-9780-b44084b88a32\" (UID: \"09685150-e1df-4f9e-9780-b44084b88a32\") " Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.676728 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ceph" (OuterVolumeSpecName: "ceph") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.676994 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09685150-e1df-4f9e-9780-b44084b88a32-kube-api-access-2bnsv" (OuterVolumeSpecName: "kube-api-access-2bnsv") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "kube-api-access-2bnsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.677861 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.698979 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.699966 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.700023 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.700845 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.702038 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.703260 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.706818 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.712791 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-inventory" (OuterVolumeSpecName: "inventory") pod "09685150-e1df-4f9e-9780-b44084b88a32" (UID: "09685150-e1df-4f9e-9780-b44084b88a32"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772835 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772879 4813 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772895 4813 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772910 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bnsv\" (UniqueName: \"kubernetes.io/projected/09685150-e1df-4f9e-9780-b44084b88a32-kube-api-access-2bnsv\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772924 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772936 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772956 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772967 4813 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772978 4813 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/09685150-e1df-4f9e-9780-b44084b88a32-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772988 4813 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:29 crc kubenswrapper[4813]: I1202 11:09:29.772999 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/09685150-e1df-4f9e-9780-b44084b88a32-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:30 crc kubenswrapper[4813]: I1202 11:09:30.205085 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" event={"ID":"09685150-e1df-4f9e-9780-b44084b88a32","Type":"ContainerDied","Data":"be55450aeedfc40eba75e75f6028570864e6dbb12080a0e9de842e9c704e7872"} Dec 02 11:09:30 crc kubenswrapper[4813]: I1202 11:09:30.205137 4813 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="be55450aeedfc40eba75e75f6028570864e6dbb12080a0e9de842e9c704e7872" Dec 02 11:09:30 crc kubenswrapper[4813]: I1202 11:09:30.205152 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2" Dec 02 11:09:35 crc kubenswrapper[4813]: I1202 11:09:35.067948 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:09:35 crc kubenswrapper[4813]: E1202 11:09:35.068805 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.833961 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Dec 02 11:09:43 crc kubenswrapper[4813]: E1202 11:09:43.834778 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09685150-e1df-4f9e-9780-b44084b88a32" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.834857 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="09685150-e1df-4f9e-9780-b44084b88a32" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 02 11:09:43 crc kubenswrapper[4813]: E1202 11:09:43.834881 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerName="extract-utilities" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.834888 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerName="extract-utilities" Dec 02 11:09:43 crc kubenswrapper[4813]: E1202 11:09:43.834908 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerName="extract-content" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.834914 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerName="extract-content" Dec 02 11:09:43 crc kubenswrapper[4813]: E1202 11:09:43.834924 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerName="registry-server" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.834929 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerName="registry-server" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.835098 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e3874b8-fdd0-4bc4-8261-91164db0a0c4" containerName="registry-server" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.835124 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="09685150-e1df-4f9e-9780-b44084b88a32" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.836065 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.838100 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.838149 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.849835 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.922055 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.925266 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.929559 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.932766 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.932836 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-sys\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.932885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-dev\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.932911 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.932930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-run\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933023 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933092 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-scripts\") pod \"cinder-backup-0\" 
(UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933118 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933153 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-lib-modules\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933275 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-config-data\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933304 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzxh\" (UniqueName: \"kubernetes.io/projected/bf61107f-cf86-48d8-a9db-bde098c122f0-kube-api-access-bhzxh\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933453 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933493 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bf61107f-cf86-48d8-a9db-bde098c122f0-ceph\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 11:09:43.933550 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:43 crc kubenswrapper[4813]: I1202 
11:09:43.936846 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-run\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035349 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035364 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6946f6db-913a-4505-b3db-e96e89534a35-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035382 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-run\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035513 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035539 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-scripts\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035559 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035580 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jkz2\" (UniqueName: \"kubernetes.io/projected/6946f6db-913a-4505-b3db-e96e89534a35-kube-api-access-4jkz2\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035605 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035634 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-lib-modules\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035770 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-config-data\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035814 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzxh\" (UniqueName: \"kubernetes.io/projected/bf61107f-cf86-48d8-a9db-bde098c122f0-kube-api-access-bhzxh\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035834 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.035858 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-run\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.036541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-lib-modules\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.036768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.036844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bf61107f-cf86-48d8-a9db-bde098c122f0-ceph\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: 
I1202 11:09:44.036878 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.036951 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.036992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037015 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037089 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-sys\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037155 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-dev\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037192 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.036929 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037357 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " 
pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037456 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037499 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-dev\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037532 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.037617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf61107f-cf86-48d8-a9db-bde098c122f0-sys\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.047249 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.047499 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-config-data\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.050432 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bf61107f-cf86-48d8-a9db-bde098c122f0-ceph\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.052659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-scripts\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.062897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzxh\" (UniqueName: \"kubernetes.io/projected/bf61107f-cf86-48d8-a9db-bde098c122f0-kube-api-access-bhzxh\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.066805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf61107f-cf86-48d8-a9db-bde098c122f0-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bf61107f-cf86-48d8-a9db-bde098c122f0\") " pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 
11:09:44.138642 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.139149 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.139302 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.141992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-run\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.139476 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142337 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-run\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142693 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-locks-brick\") pod 
\"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142872 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142936 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142966 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.142990 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6946f6db-913a-4505-b3db-e96e89534a35-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143033 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143089 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143118 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jkz2\" (UniqueName: \"kubernetes.io/projected/6946f6db-913a-4505-b3db-e96e89534a35-kube-api-access-4jkz2\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143113 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143140 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143214 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143282 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143565 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.143723 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.144034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.144105 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6946f6db-913a-4505-b3db-e96e89534a35-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.145151 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.146171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.147565 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" 
Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.149611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6946f6db-913a-4505-b3db-e96e89534a35-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.150967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6946f6db-913a-4505-b3db-e96e89534a35-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.158660 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.165903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jkz2\" (UniqueName: \"kubernetes.io/projected/6946f6db-913a-4505-b3db-e96e89534a35-kube-api-access-4jkz2\") pod \"cinder-volume-volume1-0\" (UID: \"6946f6db-913a-4505-b3db-e96e89534a35\") " pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.240112 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.397661 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-fl2xz"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.400532 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.422971 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-fl2xz"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.457309 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1edbccb4-0310-4649-aede-b296ed4dbf23-operator-scripts\") pod \"manila-db-create-fl2xz\" (UID: \"1edbccb4-0310-4649-aede-b296ed4dbf23\") " pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.457505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqx4v\" (UniqueName: \"kubernetes.io/projected/1edbccb4-0310-4649-aede-b296ed4dbf23-kube-api-access-nqx4v\") pod \"manila-db-create-fl2xz\" (UID: \"1edbccb4-0310-4649-aede-b296ed4dbf23\") " pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.465936 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-d1b0-account-create-update-z9nlm"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.467553 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.473014 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.519731 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d1b0-account-create-update-z9nlm"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.561761 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqx4v\" (UniqueName: \"kubernetes.io/projected/1edbccb4-0310-4649-aede-b296ed4dbf23-kube-api-access-nqx4v\") pod \"manila-db-create-fl2xz\" (UID: \"1edbccb4-0310-4649-aede-b296ed4dbf23\") " pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.561849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slvl\" (UniqueName: \"kubernetes.io/projected/dbd894e4-9c8a-4553-953a-c954003e97cb-kube-api-access-6slvl\") pod \"manila-d1b0-account-create-update-z9nlm\" (UID: \"dbd894e4-9c8a-4553-953a-c954003e97cb\") " pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.561972 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbd894e4-9c8a-4553-953a-c954003e97cb-operator-scripts\") pod \"manila-d1b0-account-create-update-z9nlm\" (UID: \"dbd894e4-9c8a-4553-953a-c954003e97cb\") " pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.562042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1edbccb4-0310-4649-aede-b296ed4dbf23-operator-scripts\") pod \"manila-db-create-fl2xz\" (UID: \"1edbccb4-0310-4649-aede-b296ed4dbf23\") " pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.563037 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1edbccb4-0310-4649-aede-b296ed4dbf23-operator-scripts\") pod \"manila-db-create-fl2xz\" (UID: \"1edbccb4-0310-4649-aede-b296ed4dbf23\") " pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.563310 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d89848547-h6sc5"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.565541 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.568911 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.569238 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-ks5q9" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.569401 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.569543 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.587314 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d89848547-h6sc5"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.617837 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqx4v\" (UniqueName: \"kubernetes.io/projected/1edbccb4-0310-4649-aede-b296ed4dbf23-kube-api-access-nqx4v\") pod \"manila-db-create-fl2xz\" (UID: \"1edbccb4-0310-4649-aede-b296ed4dbf23\") " pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.621240 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.624934 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.630680 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.630777 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j2c5w" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.631038 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.631260 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6slvl\" (UniqueName: \"kubernetes.io/projected/dbd894e4-9c8a-4553-953a-c954003e97cb-kube-api-access-6slvl\") pod \"manila-d1b0-account-create-update-z9nlm\" (UID: \"dbd894e4-9c8a-4553-953a-c954003e97cb\") " pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc 
kubenswrapper[4813]: I1202 11:09:44.665380 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-logs\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665405 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-config-data\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9wd\" (UniqueName: \"kubernetes.io/projected/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-kube-api-access-wk9wd\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665462 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-ceph\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-config-data\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665516 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7lj\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-kube-api-access-ch7lj\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbd894e4-9c8a-4553-953a-c954003e97cb-operator-scripts\") pod \"manila-d1b0-account-create-update-z9nlm\" (UID: \"dbd894e4-9c8a-4553-953a-c954003e97cb\") " pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665559 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-scripts\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.665596 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " 
pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.666261 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbd894e4-9c8a-4553-953a-c954003e97cb-operator-scripts\") pod \"manila-d1b0-account-create-update-z9nlm\" (UID: \"dbd894e4-9c8a-4553-953a-c954003e97cb\") " pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.666383 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-logs\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.666535 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-horizon-secret-key\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.666638 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-scripts\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.666688 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.680590 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.703766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6slvl\" (UniqueName: \"kubernetes.io/projected/dbd894e4-9c8a-4553-953a-c954003e97cb-kube-api-access-6slvl\") pod \"manila-d1b0-account-create-update-z9nlm\" (UID: \"dbd894e4-9c8a-4553-953a-c954003e97cb\") " pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.730309 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.743429 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fc4d895dc-zfl87"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.745470 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.763503 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fc4d895dc-zfl87"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.767924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.767977 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.767994 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-logs\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768017 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-config-data\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768041 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk9wd\" (UniqueName: \"kubernetes.io/projected/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-kube-api-access-wk9wd\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768094 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-ceph\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768126 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-config-data\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768146 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7lj\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-kube-api-access-ch7lj\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-scripts\") pod 
\"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768203 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-scripts\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768225 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-horizon-secret-key\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768244 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-logs\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768287 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-horizon-secret-key\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768323 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-scripts\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768342 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-logs\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768360 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768384 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-config-data\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " 
pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcgfb\" (UniqueName: \"kubernetes.io/projected/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-kube-api-access-bcgfb\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.768818 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.778323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.778666 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-logs\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.779194 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-config-data\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.779296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-logs\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.779602 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-scripts\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.785442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-horizon-secret-key\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.786002 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.787309 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-scripts\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.793596 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.794827 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-config-data\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.799786 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.801223 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-ceph\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.801469 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk9wd\" (UniqueName: \"kubernetes.io/projected/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-kube-api-access-wk9wd\") pod \"horizon-7d89848547-h6sc5\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.809392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7lj\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-kube-api-access-ch7lj\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.816716 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.846194 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.847714 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.850543 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.850651 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.873405 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-scripts\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.877639 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-scripts\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.880240 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-horizon-secret-key\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.880386 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-logs\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.880440 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-config-data\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.880475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcgfb\" (UniqueName: \"kubernetes.io/projected/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-kube-api-access-bcgfb\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.881776 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-config-data\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.882386 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-logs\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.885244 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-horizon-secret-key\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.885549 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.902040 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcgfb\" (UniqueName: \"kubernetes.io/projected/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-kube-api-access-bcgfb\") pod \"horizon-5fc4d895dc-zfl87\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.906301 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.967747 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.985714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.985849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.985899 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.985933 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.985967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v644c\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-kube-api-access-v644c\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.986004 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-logs\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc 
kubenswrapper[4813]: I1202 11:09:44.986061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.986127 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:44 crc kubenswrapper[4813]: I1202 11:09:44.986152 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.018902 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.095582 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-logs\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.095680 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.095745 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.095765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.095785 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.095889 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.095948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.095977 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.096016 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v644c\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-kube-api-access-v644c\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.096878 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-logs\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.114484 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.117400 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.126187 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.128383 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.146293 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.146718 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.162227 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v644c\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-kube-api-access-v644c\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.167651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.192555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.409346 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.435909 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bf61107f-cf86-48d8-a9db-bde098c122f0","Type":"ContainerStarted","Data":"802195c2742a2d0a4126cda08e75f864c5082ea0df296b7ccaa419a2125a966a"} Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.442581 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.482382 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.552604 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d1b0-account-create-update-z9nlm"] Dec 02 11:09:45 crc kubenswrapper[4813]: W1202 11:09:45.557859 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbd894e4_9c8a_4553_953a_c954003e97cb.slice/crio-5980f8f5b305e9c1590a0b9e80b1ea8eb98b12845eb9e0c0004abb4dc3bf7ceb WatchSource:0}: Error finding container 5980f8f5b305e9c1590a0b9e80b1ea8eb98b12845eb9e0c0004abb4dc3bf7ceb: Status 404 returned error can't find the container with id 5980f8f5b305e9c1590a0b9e80b1ea8eb98b12845eb9e0c0004abb4dc3bf7ceb Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.576391 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-fl2xz"] Dec 02 11:09:45 crc kubenswrapper[4813]: W1202 11:09:45.592802 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edbccb4_0310_4649_aede_b296ed4dbf23.slice/crio-385c847a837db7fdb972dcba5065dfd37a4f7f5c6de8092431cde9dd00464338 WatchSource:0}: Error finding container 385c847a837db7fdb972dcba5065dfd37a4f7f5c6de8092431cde9dd00464338: Status 404 returned error can't find the container with id 385c847a837db7fdb972dcba5065dfd37a4f7f5c6de8092431cde9dd00464338 Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.840786 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d89848547-h6sc5"] Dec 02 11:09:45 crc kubenswrapper[4813]: W1202 11:09:45.840778 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaeda5ec3_5da8_4aca_9ecf_c8cc4c352936.slice/crio-6561edfebff343eb0a20b98307f0db66b984c0bae90fedd26e7317d89a7608a1 WatchSource:0}: Error finding container 6561edfebff343eb0a20b98307f0db66b984c0bae90fedd26e7317d89a7608a1: Status 404 returned error can't find the container with id 6561edfebff343eb0a20b98307f0db66b984c0bae90fedd26e7317d89a7608a1 Dec 02 11:09:45 crc kubenswrapper[4813]: W1202 11:09:45.923086 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd521e7fa_e5b4_4fd3_8882_27af5fb803b3.slice/crio-d2d4b329d104bdf831ad97c69dee5f7f54c5e28990b243b545ce75817eb8cac4 WatchSource:0}: Error finding container d2d4b329d104bdf831ad97c69dee5f7f54c5e28990b243b545ce75817eb8cac4: Status 404 returned error can't find the container with id d2d4b329d104bdf831ad97c69dee5f7f54c5e28990b243b545ce75817eb8cac4 Dec 02 11:09:45 crc kubenswrapper[4813]: I1202 11:09:45.928201 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fc4d895dc-zfl87"] Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.067979 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.185326 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:46 crc kubenswrapper[4813]: W1202 11:09:46.234986 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc637ad0a_77d6_47fa_b627_6b0aad3f4793.slice/crio-e0fee95b4ec887cf0b5c1c2c6f115ec34c7040f2b514f7d8ab308187b7eade7a WatchSource:0}: Error 
finding container e0fee95b4ec887cf0b5c1c2c6f115ec34c7040f2b514f7d8ab308187b7eade7a: Status 404 returned error can't find the container with id e0fee95b4ec887cf0b5c1c2c6f115ec34c7040f2b514f7d8ab308187b7eade7a Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.467509 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c637ad0a-77d6-47fa-b627-6b0aad3f4793","Type":"ContainerStarted","Data":"e0fee95b4ec887cf0b5c1c2c6f115ec34c7040f2b514f7d8ab308187b7eade7a"} Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.469449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d89848547-h6sc5" event={"ID":"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936","Type":"ContainerStarted","Data":"6561edfebff343eb0a20b98307f0db66b984c0bae90fedd26e7317d89a7608a1"} Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.471155 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"6946f6db-913a-4505-b3db-e96e89534a35","Type":"ContainerStarted","Data":"ff4390da5fd317cb92e14ab980b2e17c2db46d532bfd799b9df48d138b5354c4"} Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.472208 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5586bb88-d30a-427f-8834-006660531ae8","Type":"ContainerStarted","Data":"ea1455a0a593b2e51fb7ffea957bb6f649aac94bcf1cae7476b5d667006ea4cd"} Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.475405 4813 generic.go:334] "Generic (PLEG): container finished" podID="1edbccb4-0310-4649-aede-b296ed4dbf23" containerID="90d2fa373e19cc2cc1a421eaa83f90b15d4b6c736eb859c4e994b8f1b1c6a40d" exitCode=0 Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.475465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-fl2xz" event={"ID":"1edbccb4-0310-4649-aede-b296ed4dbf23","Type":"ContainerDied","Data":"90d2fa373e19cc2cc1a421eaa83f90b15d4b6c736eb859c4e994b8f1b1c6a40d"} Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.475487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-fl2xz" event={"ID":"1edbccb4-0310-4649-aede-b296ed4dbf23","Type":"ContainerStarted","Data":"385c847a837db7fdb972dcba5065dfd37a4f7f5c6de8092431cde9dd00464338"} Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.479060 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fc4d895dc-zfl87" event={"ID":"d521e7fa-e5b4-4fd3-8882-27af5fb803b3","Type":"ContainerStarted","Data":"d2d4b329d104bdf831ad97c69dee5f7f54c5e28990b243b545ce75817eb8cac4"} Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.483421 4813 generic.go:334] "Generic (PLEG): container finished" podID="dbd894e4-9c8a-4553-953a-c954003e97cb" containerID="b3593bb1cabc8dd56962de5f743b403cd9ae20fc6bad153fba94b7f33d8b7691" exitCode=0 Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.483477 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d1b0-account-create-update-z9nlm" event={"ID":"dbd894e4-9c8a-4553-953a-c954003e97cb","Type":"ContainerDied","Data":"b3593bb1cabc8dd56962de5f743b403cd9ae20fc6bad153fba94b7f33d8b7691"} Dec 02 11:09:46 crc kubenswrapper[4813]: I1202 11:09:46.483510 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d1b0-account-create-update-z9nlm" event={"ID":"dbd894e4-9c8a-4553-953a-c954003e97cb","Type":"ContainerStarted","Data":"5980f8f5b305e9c1590a0b9e80b1ea8eb98b12845eb9e0c0004abb4dc3bf7ceb"} Dec 02 
11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.068923 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:09:47 crc kubenswrapper[4813]: E1202 11:09:47.070890 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.452981 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fc4d895dc-zfl87"] Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.553944 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77cfc9896b-llw2g"] Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.569781 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.577910 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.659347 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77cfc9896b-llw2g"] Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.685083 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bf61107f-cf86-48d8-a9db-bde098c122f0","Type":"ContainerStarted","Data":"547c2537f27d64eedba933b21e12e5d9a3299bd86f3e33635c8f8a1493fa452f"} Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.685142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bf61107f-cf86-48d8-a9db-bde098c122f0","Type":"ContainerStarted","Data":"2764bdfa842a3c62f531b8509c48ddcae87bafb5fccfbc4b3190136aac524222"} Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.690705 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c637ad0a-77d6-47fa-b627-6b0aad3f4793","Type":"ContainerStarted","Data":"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c"} Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.696858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"6946f6db-913a-4505-b3db-e96e89534a35","Type":"ContainerStarted","Data":"14266ecf8b366c409f3f036e265d5a0e781d2cc9a7710d06c7fcf4f29e81b20f"} Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.697168 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"6946f6db-913a-4505-b3db-e96e89534a35","Type":"ContainerStarted","Data":"01385ee634ac3cba5c3914ad6838f44cd3c648b6f19e785b2ed71def018095b0"} Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.703514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5586bb88-d30a-427f-8834-006660531ae8","Type":"ContainerStarted","Data":"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68"} Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.722131 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 
11:09:47.738387 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-config-data\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.738456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77l7g\" (UniqueName: \"kubernetes.io/projected/783caf3f-632f-4ee5-9ace-b9337879d5c0-kube-api-access-77l7g\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.738688 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-secret-key\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.738929 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-combined-ca-bundle\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.739194 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-tls-certs\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.739313 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783caf3f-632f-4ee5-9ace-b9337879d5c0-logs\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.739449 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-scripts\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.755715 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d89848547-h6sc5"] Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.769625 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.782754 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c5ddb87c8-5vtbk"] Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.787258 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.805837 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c5ddb87c8-5vtbk"] Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.848061 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-config-data\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.848150 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77l7g\" (UniqueName: \"kubernetes.io/projected/783caf3f-632f-4ee5-9ace-b9337879d5c0-kube-api-access-77l7g\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.848219 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-secret-key\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.848281 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-combined-ca-bundle\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.848346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-tls-certs\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.848380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783caf3f-632f-4ee5-9ace-b9337879d5c0-logs\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.848412 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-scripts\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.849533 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-scripts\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.860846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783caf3f-632f-4ee5-9ace-b9337879d5c0-logs\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " 
pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.862749 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.103160668 podStartE2EDuration="4.862729554s" podCreationTimestamp="2025-12-02 11:09:43 +0000 UTC" firstStartedPulling="2025-12-02 11:09:44.99013207 +0000 UTC m=+3709.185306372" lastFinishedPulling="2025-12-02 11:09:46.749700956 +0000 UTC m=+3710.944875258" observedRunningTime="2025-12-02 11:09:47.713820797 +0000 UTC m=+3711.908995099" watchObservedRunningTime="2025-12-02 11:09:47.862729554 +0000 UTC m=+3712.057903846" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.868377 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-config-data\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.878657 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.612209564 podStartE2EDuration="4.878630745s" podCreationTimestamp="2025-12-02 11:09:43 +0000 UTC" firstStartedPulling="2025-12-02 11:09:45.484784677 +0000 UTC m=+3709.679958979" lastFinishedPulling="2025-12-02 11:09:46.751205868 +0000 UTC m=+3710.946380160" observedRunningTime="2025-12-02 11:09:47.756692614 +0000 UTC m=+3711.951866916" watchObservedRunningTime="2025-12-02 11:09:47.878630745 +0000 UTC m=+3712.073805047" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.879535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-combined-ca-bundle\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.882125 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77l7g\" (UniqueName: \"kubernetes.io/projected/783caf3f-632f-4ee5-9ace-b9337879d5c0-kube-api-access-77l7g\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.930769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-tls-certs\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.934457 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-secret-key\") pod \"horizon-77cfc9896b-llw2g\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.956196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b22j\" (UniqueName: \"kubernetes.io/projected/757d290c-ab26-4557-a758-10924585a86b-kube-api-access-5b22j\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " 
pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.956672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/757d290c-ab26-4557-a758-10924585a86b-scripts\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.956703 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/757d290c-ab26-4557-a758-10924585a86b-config-data\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.956757 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-combined-ca-bundle\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.956805 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-horizon-tls-certs\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.956837 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/757d290c-ab26-4557-a758-10924585a86b-logs\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.956876 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-horizon-secret-key\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:47 crc kubenswrapper[4813]: I1202 11:09:47.975716 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.058367 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-combined-ca-bundle\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.058478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-horizon-tls-certs\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.058522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/757d290c-ab26-4557-a758-10924585a86b-logs\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.058571 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-horizon-secret-key\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.058636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b22j\" (UniqueName: \"kubernetes.io/projected/757d290c-ab26-4557-a758-10924585a86b-kube-api-access-5b22j\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.058686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/757d290c-ab26-4557-a758-10924585a86b-scripts\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.058716 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/757d290c-ab26-4557-a758-10924585a86b-config-data\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.060136 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/757d290c-ab26-4557-a758-10924585a86b-config-data\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.090450 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-combined-ca-bundle\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.090750 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/757d290c-ab26-4557-a758-10924585a86b-scripts\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.091135 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/757d290c-ab26-4557-a758-10924585a86b-logs\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.096775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-horizon-tls-certs\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.098584 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/757d290c-ab26-4557-a758-10924585a86b-horizon-secret-key\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.168068 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b22j\" (UniqueName: \"kubernetes.io/projected/757d290c-ab26-4557-a758-10924585a86b-kube-api-access-5b22j\") pod \"horizon-c5ddb87c8-5vtbk\" (UID: \"757d290c-ab26-4557-a758-10924585a86b\") " pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.422740 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.428484 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.476498 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.572958 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqx4v\" (UniqueName: \"kubernetes.io/projected/1edbccb4-0310-4649-aede-b296ed4dbf23-kube-api-access-nqx4v\") pod \"1edbccb4-0310-4649-aede-b296ed4dbf23\" (UID: \"1edbccb4-0310-4649-aede-b296ed4dbf23\") " Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.573377 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1edbccb4-0310-4649-aede-b296ed4dbf23-operator-scripts\") pod \"1edbccb4-0310-4649-aede-b296ed4dbf23\" (UID: \"1edbccb4-0310-4649-aede-b296ed4dbf23\") " Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.573497 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6slvl\" (UniqueName: \"kubernetes.io/projected/dbd894e4-9c8a-4553-953a-c954003e97cb-kube-api-access-6slvl\") pod \"dbd894e4-9c8a-4553-953a-c954003e97cb\" (UID: \"dbd894e4-9c8a-4553-953a-c954003e97cb\") " Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.573559 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbd894e4-9c8a-4553-953a-c954003e97cb-operator-scripts\") pod \"dbd894e4-9c8a-4553-953a-c954003e97cb\" (UID: \"dbd894e4-9c8a-4553-953a-c954003e97cb\") " Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.575279 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbd894e4-9c8a-4553-953a-c954003e97cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbd894e4-9c8a-4553-953a-c954003e97cb" (UID: "dbd894e4-9c8a-4553-953a-c954003e97cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.575973 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1edbccb4-0310-4649-aede-b296ed4dbf23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1edbccb4-0310-4649-aede-b296ed4dbf23" (UID: "1edbccb4-0310-4649-aede-b296ed4dbf23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.583105 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edbccb4-0310-4649-aede-b296ed4dbf23-kube-api-access-nqx4v" (OuterVolumeSpecName: "kube-api-access-nqx4v") pod "1edbccb4-0310-4649-aede-b296ed4dbf23" (UID: "1edbccb4-0310-4649-aede-b296ed4dbf23"). InnerVolumeSpecName "kube-api-access-nqx4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.585318 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd894e4-9c8a-4553-953a-c954003e97cb-kube-api-access-6slvl" (OuterVolumeSpecName: "kube-api-access-6slvl") pod "dbd894e4-9c8a-4553-953a-c954003e97cb" (UID: "dbd894e4-9c8a-4553-953a-c954003e97cb"). InnerVolumeSpecName "kube-api-access-6slvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.676658 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1edbccb4-0310-4649-aede-b296ed4dbf23-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.676707 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6slvl\" (UniqueName: \"kubernetes.io/projected/dbd894e4-9c8a-4553-953a-c954003e97cb-kube-api-access-6slvl\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.676721 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbd894e4-9c8a-4553-953a-c954003e97cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.676733 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqx4v\" (UniqueName: \"kubernetes.io/projected/1edbccb4-0310-4649-aede-b296ed4dbf23-kube-api-access-nqx4v\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.728378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c637ad0a-77d6-47fa-b627-6b0aad3f4793","Type":"ContainerStarted","Data":"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b"} Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.728856 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerName="glance-log" containerID="cri-o://bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c" gracePeriod=30 Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.729480 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerName="glance-httpd" containerID="cri-o://aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b" gracePeriod=30 Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.768369 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5586bb88-d30a-427f-8834-006660531ae8","Type":"ContainerStarted","Data":"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0"} Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.768549 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5586bb88-d30a-427f-8834-006660531ae8" containerName="glance-log" containerID="cri-o://b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68" gracePeriod=30 Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.770066 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5586bb88-d30a-427f-8834-006660531ae8" containerName="glance-httpd" containerID="cri-o://53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0" gracePeriod=30 Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.780251 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77cfc9896b-llw2g"] Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.782127 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-fl2xz" 
event={"ID":"1edbccb4-0310-4649-aede-b296ed4dbf23","Type":"ContainerDied","Data":"385c847a837db7fdb972dcba5065dfd37a4f7f5c6de8092431cde9dd00464338"} Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.782183 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="385c847a837db7fdb972dcba5065dfd37a4f7f5c6de8092431cde9dd00464338" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.782280 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-fl2xz" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.786297 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d1b0-account-create-update-z9nlm" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.787937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d1b0-account-create-update-z9nlm" event={"ID":"dbd894e4-9c8a-4553-953a-c954003e97cb","Type":"ContainerDied","Data":"5980f8f5b305e9c1590a0b9e80b1ea8eb98b12845eb9e0c0004abb4dc3bf7ceb"} Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.787975 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5980f8f5b305e9c1590a0b9e80b1ea8eb98b12845eb9e0c0004abb4dc3bf7ceb" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.799201 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.799180079 podStartE2EDuration="4.799180079s" podCreationTimestamp="2025-12-02 11:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:09:48.762004954 +0000 UTC m=+3712.957179246" watchObservedRunningTime="2025-12-02 11:09:48.799180079 +0000 UTC m=+3712.994354381" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.829736 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.829690565 podStartE2EDuration="4.829690565s" podCreationTimestamp="2025-12-02 11:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:09:48.793697823 +0000 UTC m=+3712.988872135" watchObservedRunningTime="2025-12-02 11:09:48.829690565 +0000 UTC m=+3713.024864887" Dec 02 11:09:48 crc kubenswrapper[4813]: I1202 11:09:48.993928 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c5ddb87c8-5vtbk"] Dec 02 11:09:49 crc kubenswrapper[4813]: W1202 11:09:49.033975 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod757d290c_ab26_4557_a758_10924585a86b.slice/crio-2444a676dcdabbe1e0c88dbf7c7b67a9ebb16b830ade59a99674e183b5274e11 WatchSource:0}: Error finding container 2444a676dcdabbe1e0c88dbf7c7b67a9ebb16b830ade59a99674e183b5274e11: Status 404 returned error can't find the container with id 2444a676dcdabbe1e0c88dbf7c7b67a9ebb16b830ade59a99674e183b5274e11 Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.159541 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.240884 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.558793 4813 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.709155 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-httpd-run\") pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.710125 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.710577 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.710730 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-logs\") pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.710924 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v644c\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-kube-api-access-v644c\") pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.710981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-combined-ca-bundle\") pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.711017 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-ceph\") pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.711092 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-config-data\") pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.711204 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-scripts\") pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.711315 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-internal-tls-certs\") 
pod \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\" (UID: \"c637ad0a-77d6-47fa-b627-6b0aad3f4793\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.711887 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-logs" (OuterVolumeSpecName: "logs") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.712438 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.712463 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c637ad0a-77d6-47fa-b627-6b0aad3f4793-logs\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.723462 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-ceph" (OuterVolumeSpecName: "ceph") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.723941 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.746690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-scripts" (OuterVolumeSpecName: "scripts") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.749371 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-kube-api-access-v644c" (OuterVolumeSpecName: "kube-api-access-v644c") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "kube-api-access-v644c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.821208 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v644c\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-kube-api-access-v644c\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.821250 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c637ad0a-77d6-47fa-b627-6b0aad3f4793-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.821260 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.821296 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.821697 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.824698 4813 generic.go:334] "Generic (PLEG): container finished" podID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerID="aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b" exitCode=143 Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.824729 4813 generic.go:334] "Generic (PLEG): container finished" podID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerID="bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c" exitCode=143 Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.824811 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.826769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c637ad0a-77d6-47fa-b627-6b0aad3f4793","Type":"ContainerDied","Data":"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b"} Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.826833 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c637ad0a-77d6-47fa-b627-6b0aad3f4793","Type":"ContainerDied","Data":"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c"} Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.826844 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c637ad0a-77d6-47fa-b627-6b0aad3f4793","Type":"ContainerDied","Data":"e0fee95b4ec887cf0b5c1c2c6f115ec34c7040f2b514f7d8ab308187b7eade7a"} Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.826861 4813 scope.go:117] "RemoveContainer" containerID="aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.853403 4813 generic.go:334] "Generic (PLEG): container finished" podID="5586bb88-d30a-427f-8834-006660531ae8" containerID="53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0" exitCode=143 Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.853449 4813 generic.go:334] "Generic (PLEG): container finished" podID="5586bb88-d30a-427f-8834-006660531ae8" containerID="b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68" exitCode=143 Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.853555 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5586bb88-d30a-427f-8834-006660531ae8","Type":"ContainerDied","Data":"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0"} Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.853589 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5586bb88-d30a-427f-8834-006660531ae8","Type":"ContainerDied","Data":"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68"} Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.853605 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5586bb88-d30a-427f-8834-006660531ae8","Type":"ContainerDied","Data":"ea1455a0a593b2e51fb7ffea957bb6f649aac94bcf1cae7476b5d667006ea4cd"} Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.853671 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.872190 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.876478 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.879328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.880850 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-config-data" (OuterVolumeSpecName: "config-data") pod "c637ad0a-77d6-47fa-b627-6b0aad3f4793" (UID: "c637ad0a-77d6-47fa-b627-6b0aad3f4793"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.882769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5ddb87c8-5vtbk" event={"ID":"757d290c-ab26-4557-a758-10924585a86b","Type":"ContainerStarted","Data":"2444a676dcdabbe1e0c88dbf7c7b67a9ebb16b830ade59a99674e183b5274e11"} Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.889452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77cfc9896b-llw2g" event={"ID":"783caf3f-632f-4ee5-9ace-b9337879d5c0","Type":"ContainerStarted","Data":"11acdd7e9bff8a992e5c598f21c773c0c8aaf8d8132a113ad58e533a98217120"} Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.907243 4813 scope.go:117] "RemoveContainer" containerID="bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.922953 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-ceph\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.923015 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-public-tls-certs\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.923111 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-combined-ca-bundle\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.923165 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-scripts\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.923225 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-logs\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.923265 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ch7lj\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-kube-api-access-ch7lj\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.923404 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.923462 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-config-data\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.923540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-httpd-run\") pod \"5586bb88-d30a-427f-8834-006660531ae8\" (UID: \"5586bb88-d30a-427f-8834-006660531ae8\") " Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.924222 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.924251 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.924264 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c637ad0a-77d6-47fa-b627-6b0aad3f4793-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.924277 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.925351 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.926976 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-logs" (OuterVolumeSpecName: "logs") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.935234 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.935373 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-kube-api-access-ch7lj" (OuterVolumeSpecName: "kube-api-access-ch7lj") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "kube-api-access-ch7lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.935681 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-scripts" (OuterVolumeSpecName: "scripts") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.939371 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-ceph" (OuterVolumeSpecName: "ceph") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.966452 4813 scope.go:117] "RemoveContainer" containerID="aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b" Dec 02 11:09:49 crc kubenswrapper[4813]: E1202 11:09:49.968689 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b\": container with ID starting with aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b not found: ID does not exist" containerID="aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.968749 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b"} err="failed to get container status \"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b\": rpc error: code = NotFound desc = could not find container \"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b\": container with ID starting with aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b not found: ID does not exist" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.968782 4813 scope.go:117] "RemoveContainer" containerID="bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c" Dec 02 11:09:49 crc kubenswrapper[4813]: E1202 11:09:49.969576 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c\": container with ID starting with bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c not found: ID does not exist" containerID="bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.969613 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c"} err="failed to get container status 
\"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c\": rpc error: code = NotFound desc = could not find container \"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c\": container with ID starting with bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c not found: ID does not exist" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.969633 4813 scope.go:117] "RemoveContainer" containerID="aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.969916 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b"} err="failed to get container status \"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b\": rpc error: code = NotFound desc = could not find container \"aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b\": container with ID starting with aa999fad33b7854a45be11bb013a519a22a2943a11c013ff6bcac6119eea9c5b not found: ID does not exist" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.969938 4813 scope.go:117] "RemoveContainer" containerID="bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.970220 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c"} err="failed to get container status \"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c\": rpc error: code = NotFound desc = could not find container \"bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c\": container with ID starting with bbe504b7db01a10f7f9f139045733cb18017153f6d598c6d0795374c447c039c not found: ID does not exist" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.970242 4813 scope.go:117] "RemoveContainer" containerID="53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.971888 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:49 crc kubenswrapper[4813]: I1202 11:09:49.996957 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.019272 4813 scope.go:117] "RemoveContainer" containerID="b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.028065 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.028150 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.028159 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.028167 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.028194 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-logs\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.028206 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch7lj\" (UniqueName: \"kubernetes.io/projected/5586bb88-d30a-427f-8834-006660531ae8-kube-api-access-ch7lj\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.028226 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.028236 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5586bb88-d30a-427f-8834-006660531ae8-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.052653 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.058273 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-config-data" (OuterVolumeSpecName: "config-data") pod "5586bb88-d30a-427f-8834-006660531ae8" (UID: "5586bb88-d30a-427f-8834-006660531ae8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.071450 4813 scope.go:117] "RemoveContainer" containerID="53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0" Dec 02 11:09:50 crc kubenswrapper[4813]: E1202 11:09:50.071836 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0\": container with ID starting with 53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0 not found: ID does not exist" containerID="53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.071865 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0"} err="failed to get container status \"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0\": rpc error: code = NotFound desc = could not find container \"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0\": container with ID starting with 53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0 not found: ID does not exist" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.071885 4813 scope.go:117] "RemoveContainer" containerID="b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68" Dec 02 11:09:50 crc kubenswrapper[4813]: E1202 11:09:50.072486 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68\": container with ID starting with b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68 not found: ID does not exist" containerID="b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.072549 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68"} err="failed to get container status \"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68\": rpc error: code = NotFound desc = could not find container \"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68\": container with ID starting with b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68 not found: ID does not exist" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.072573 4813 scope.go:117] "RemoveContainer" containerID="53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.072987 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0"} err="failed to get container status \"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0\": rpc error: code = NotFound desc = could not find container \"53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0\": container with ID starting with 53c8cab0dec07ab85ec433d01a77cedf1b80da7a91abcd5d72a5eb684dc790b0 not found: ID does not exist" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.073006 4813 scope.go:117] "RemoveContainer" containerID="b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.077949 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68"} err="failed to get container status \"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68\": rpc error: code = NotFound desc = could not find container \"b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68\": container with ID starting with b250a22a283815ab1128540f97c4a6ec9c56ecaa106fb06b6f7b77231e9fab68 not found: ID does not exist" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.131136 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.131492 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5586bb88-d30a-427f-8834-006660531ae8-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.223942 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.232289 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.252567 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.271131 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291137 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:50 crc kubenswrapper[4813]: E1202 11:09:50.291596 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerName="glance-httpd" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291621 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerName="glance-httpd" Dec 02 11:09:50 crc kubenswrapper[4813]: E1202 11:09:50.291636 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5586bb88-d30a-427f-8834-006660531ae8" containerName="glance-log" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291642 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5586bb88-d30a-427f-8834-006660531ae8" containerName="glance-log" Dec 02 11:09:50 crc kubenswrapper[4813]: E1202 11:09:50.291674 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5586bb88-d30a-427f-8834-006660531ae8" containerName="glance-httpd" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291683 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5586bb88-d30a-427f-8834-006660531ae8" containerName="glance-httpd" Dec 02 11:09:50 crc kubenswrapper[4813]: E1202 11:09:50.291701 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd894e4-9c8a-4553-953a-c954003e97cb" containerName="mariadb-account-create-update" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291710 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd894e4-9c8a-4553-953a-c954003e97cb" containerName="mariadb-account-create-update" Dec 02 11:09:50 crc kubenswrapper[4813]: E1202 11:09:50.291728 4813 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1edbccb4-0310-4649-aede-b296ed4dbf23" containerName="mariadb-database-create" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291734 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edbccb4-0310-4649-aede-b296ed4dbf23" containerName="mariadb-database-create" Dec 02 11:09:50 crc kubenswrapper[4813]: E1202 11:09:50.291745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerName="glance-log" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291751 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerName="glance-log" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291921 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edbccb4-0310-4649-aede-b296ed4dbf23" containerName="mariadb-database-create" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291933 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerName="glance-log" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291941 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5586bb88-d30a-427f-8834-006660531ae8" containerName="glance-httpd" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291949 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd894e4-9c8a-4553-953a-c954003e97cb" containerName="mariadb-account-create-update" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291964 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5586bb88-d30a-427f-8834-006660531ae8" containerName="glance-log" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.291975 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" containerName="glance-httpd" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.293090 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.298630 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.298817 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.298956 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.299217 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j2c5w" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.303936 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.328395 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.330554 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.336708 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.336732 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.363379 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.439427 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b990be9-d837-4418-8909-3b050114af00-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.439506 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz9vp\" (UniqueName: \"kubernetes.io/projected/78c39f26-5444-4386-99f7-f672f7554931-kube-api-access-fz9vp\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.439543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78c39f26-5444-4386-99f7-f672f7554931-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.439614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.439647 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b990be9-d837-4418-8909-3b050114af00-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.439679 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.439729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c39f26-5444-4386-99f7-f672f7554931-logs\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.440019 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-scripts\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.440099 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.440306 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grzjf\" (UniqueName: \"kubernetes.io/projected/7b990be9-d837-4418-8909-3b050114af00-kube-api-access-grzjf\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.440441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/78c39f26-5444-4386-99f7-f672f7554931-ceph\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.440676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.440767 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.440810 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-config-data\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.441032 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.441153 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.441217 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b990be9-d837-4418-8909-3b050114af00-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.441326 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548464 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c39f26-5444-4386-99f7-f672f7554931-logs\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548613 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-scripts\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548649 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grzjf\" (UniqueName: \"kubernetes.io/projected/7b990be9-d837-4418-8909-3b050114af00-kube-api-access-grzjf\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548728 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/78c39f26-5444-4386-99f7-f672f7554931-ceph\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548846 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-config-data\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548917 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548940 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.548973 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b990be9-d837-4418-8909-3b050114af00-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.549004 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.549037 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b990be9-d837-4418-8909-3b050114af00-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.549088 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz9vp\" (UniqueName: \"kubernetes.io/projected/78c39f26-5444-4386-99f7-f672f7554931-kube-api-access-fz9vp\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.549119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78c39f26-5444-4386-99f7-f672f7554931-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.549120 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.549163 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.549197 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b990be9-d837-4418-8909-3b050114af00-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.549828 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b990be9-d837-4418-8909-3b050114af00-logs\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.550310 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c39f26-5444-4386-99f7-f672f7554931-logs\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.551266 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.551381 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7b990be9-d837-4418-8909-3b050114af00-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.553966 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78c39f26-5444-4386-99f7-f672f7554931-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.556886 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.557909 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " 
pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.561490 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-config-data\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.562634 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-scripts\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.564558 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.572064 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c39f26-5444-4386-99f7-f672f7554931-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.576401 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7b990be9-d837-4418-8909-3b050114af00-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.577854 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.578178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b990be9-d837-4418-8909-3b050114af00-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.578252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz9vp\" (UniqueName: \"kubernetes.io/projected/78c39f26-5444-4386-99f7-f672f7554931-kube-api-access-fz9vp\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.604363 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/78c39f26-5444-4386-99f7-f672f7554931-ceph\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.606691 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-grzjf\" (UniqueName: \"kubernetes.io/projected/7b990be9-d837-4418-8909-3b050114af00-kube-api-access-grzjf\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.688942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7b990be9-d837-4418-8909-3b050114af00\") " pod="openstack/glance-default-internal-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.693804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"78c39f26-5444-4386-99f7-f672f7554931\") " pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.928522 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 11:09:50 crc kubenswrapper[4813]: I1202 11:09:50.973617 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 11:09:51 crc kubenswrapper[4813]: I1202 11:09:51.499648 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 11:09:51 crc kubenswrapper[4813]: I1202 11:09:51.766536 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 11:09:51 crc kubenswrapper[4813]: W1202 11:09:51.766824 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b990be9_d837_4418_8909_3b050114af00.slice/crio-12f8e8d6a91d0b5fe28bf633cefca6bbc56ac83e19734b845c02f76df8826ff8 WatchSource:0}: Error finding container 12f8e8d6a91d0b5fe28bf633cefca6bbc56ac83e19734b845c02f76df8826ff8: Status 404 returned error can't find the container with id 12f8e8d6a91d0b5fe28bf633cefca6bbc56ac83e19734b845c02f76df8826ff8 Dec 02 11:09:52 crc kubenswrapper[4813]: I1202 11:09:52.005340 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78c39f26-5444-4386-99f7-f672f7554931","Type":"ContainerStarted","Data":"1ddc1a41c01ccafc7ed62a1a7d8b42c395999a274e0de12d384f210232d5f61c"} Dec 02 11:09:52 crc kubenswrapper[4813]: I1202 11:09:52.007633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b990be9-d837-4418-8909-3b050114af00","Type":"ContainerStarted","Data":"12f8e8d6a91d0b5fe28bf633cefca6bbc56ac83e19734b845c02f76df8826ff8"} Dec 02 11:09:52 crc kubenswrapper[4813]: I1202 11:09:52.086928 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5586bb88-d30a-427f-8834-006660531ae8" path="/var/lib/kubelet/pods/5586bb88-d30a-427f-8834-006660531ae8/volumes" Dec 02 11:09:52 crc kubenswrapper[4813]: I1202 11:09:52.091571 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c637ad0a-77d6-47fa-b627-6b0aad3f4793" path="/var/lib/kubelet/pods/c637ad0a-77d6-47fa-b627-6b0aad3f4793/volumes" Dec 02 11:09:53 crc kubenswrapper[4813]: I1202 11:09:53.051364 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7b990be9-d837-4418-8909-3b050114af00","Type":"ContainerStarted","Data":"5a605c00b8bfab695dd4d83623fc1b6f523e3c9e68506489ee9128e6a070f033"} Dec 02 11:09:53 crc kubenswrapper[4813]: I1202 11:09:53.054859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78c39f26-5444-4386-99f7-f672f7554931","Type":"ContainerStarted","Data":"ac53e175a4b01bbbd4fa18715bd5535eb5dda177de38f377a4d4c87a13591c5c"} Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.080827 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7b990be9-d837-4418-8909-3b050114af00","Type":"ContainerStarted","Data":"d7122691295c8ad44ffe6eccc2d3677bef76fd2dc3c1858f64afd26cc0edefb1"} Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.081455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78c39f26-5444-4386-99f7-f672f7554931","Type":"ContainerStarted","Data":"373f27e2e16a19f215c4fa490a97bd479425700b7031d9e85e60bfff89fc0f03"} Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.104542 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.104519944 podStartE2EDuration="4.104519944s" podCreationTimestamp="2025-12-02 11:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:09:54.090436365 +0000 UTC m=+3718.285610667" watchObservedRunningTime="2025-12-02 11:09:54.104519944 +0000 UTC m=+3718.299694246" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.128054 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.128038202 podStartE2EDuration="4.128038202s" podCreationTimestamp="2025-12-02 11:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:09:54.112695676 +0000 UTC m=+3718.307869978" watchObservedRunningTime="2025-12-02 11:09:54.128038202 +0000 UTC m=+3718.323212504" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.520154 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.664983 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.880745 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-p5xnm"] Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.881974 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.883757 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.886029 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-228b7" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.891699 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-p5xnm"] Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.964580 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5ld\" (UniqueName: \"kubernetes.io/projected/17bfd912-d75b-48af-8433-1b5d24d50856-kube-api-access-qx5ld\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.964960 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-combined-ca-bundle\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.965023 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-config-data\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:54 crc kubenswrapper[4813]: I1202 11:09:54.965059 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-job-config-data\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.069718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-combined-ca-bundle\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.070328 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-config-data\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.070394 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-job-config-data\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.073928 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5ld\" (UniqueName: \"kubernetes.io/projected/17bfd912-d75b-48af-8433-1b5d24d50856-kube-api-access-qx5ld\") pod \"manila-db-sync-p5xnm\" (UID: 
\"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.079274 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-combined-ca-bundle\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.079501 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-job-config-data\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.086848 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-config-data\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.092169 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5ld\" (UniqueName: \"kubernetes.io/projected/17bfd912-d75b-48af-8433-1b5d24d50856-kube-api-access-qx5ld\") pod \"manila-db-sync-p5xnm\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:55 crc kubenswrapper[4813]: I1202 11:09:55.225692 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-p5xnm" Dec 02 11:09:59 crc kubenswrapper[4813]: I1202 11:09:59.068902 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:09:59 crc kubenswrapper[4813]: E1202 11:09:59.069689 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:09:59 crc kubenswrapper[4813]: I1202 11:09:59.141556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fc4d895dc-zfl87" event={"ID":"d521e7fa-e5b4-4fd3-8882-27af5fb803b3","Type":"ContainerStarted","Data":"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00"} Dec 02 11:09:59 crc kubenswrapper[4813]: I1202 11:09:59.143511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5ddb87c8-5vtbk" event={"ID":"757d290c-ab26-4557-a758-10924585a86b","Type":"ContainerStarted","Data":"338566bf8117d9dcedfc0dfc51ef3b367a48e6e4d2b6de60849f0729dbf205bf"} Dec 02 11:09:59 crc kubenswrapper[4813]: I1202 11:09:59.145757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77cfc9896b-llw2g" event={"ID":"783caf3f-632f-4ee5-9ace-b9337879d5c0","Type":"ContainerStarted","Data":"9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3"} Dec 02 11:09:59 crc kubenswrapper[4813]: I1202 11:09:59.147215 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d89848547-h6sc5" 
event={"ID":"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936","Type":"ContainerStarted","Data":"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4"} Dec 02 11:09:59 crc kubenswrapper[4813]: I1202 11:09:59.230654 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-p5xnm"] Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.171244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p5xnm" event={"ID":"17bfd912-d75b-48af-8433-1b5d24d50856","Type":"ContainerStarted","Data":"51e22542af66771795f7f22706a7059b298959e9d55b5e040ca6754605d708e7"} Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.176886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d89848547-h6sc5" event={"ID":"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936","Type":"ContainerStarted","Data":"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a"} Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.177201 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d89848547-h6sc5" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerName="horizon-log" containerID="cri-o://c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4" gracePeriod=30 Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.178041 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d89848547-h6sc5" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerName="horizon" containerID="cri-o://b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a" gracePeriod=30 Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.183038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fc4d895dc-zfl87" event={"ID":"d521e7fa-e5b4-4fd3-8882-27af5fb803b3","Type":"ContainerStarted","Data":"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7"} Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.183104 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fc4d895dc-zfl87" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerName="horizon-log" containerID="cri-o://5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00" gracePeriod=30 Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.183217 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fc4d895dc-zfl87" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerName="horizon" containerID="cri-o://208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7" gracePeriod=30 Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.190211 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c5ddb87c8-5vtbk" event={"ID":"757d290c-ab26-4557-a758-10924585a86b","Type":"ContainerStarted","Data":"684fb10946609f1288f54c795115724897487ce116f97b77c4e94a25842aacb2"} Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.194992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77cfc9896b-llw2g" event={"ID":"783caf3f-632f-4ee5-9ace-b9337879d5c0","Type":"ContainerStarted","Data":"ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73"} Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.206879 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d89848547-h6sc5" podStartSLOduration=3.383563734 podStartE2EDuration="16.206858378s" podCreationTimestamp="2025-12-02 
11:09:44 +0000 UTC" firstStartedPulling="2025-12-02 11:09:45.856522857 +0000 UTC m=+3710.051697159" lastFinishedPulling="2025-12-02 11:09:58.679817501 +0000 UTC m=+3722.874991803" observedRunningTime="2025-12-02 11:10:00.203504212 +0000 UTC m=+3724.398678524" watchObservedRunningTime="2025-12-02 11:10:00.206858378 +0000 UTC m=+3724.402032680" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.235470 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c5ddb87c8-5vtbk" podStartSLOduration=3.483553791 podStartE2EDuration="13.235445879s" podCreationTimestamp="2025-12-02 11:09:47 +0000 UTC" firstStartedPulling="2025-12-02 11:09:49.038206923 +0000 UTC m=+3713.233381225" lastFinishedPulling="2025-12-02 11:09:58.790099011 +0000 UTC m=+3722.985273313" observedRunningTime="2025-12-02 11:10:00.231030824 +0000 UTC m=+3724.426205126" watchObservedRunningTime="2025-12-02 11:10:00.235445879 +0000 UTC m=+3724.430620181" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.251919 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fc4d895dc-zfl87" podStartSLOduration=3.32563677 podStartE2EDuration="16.251899186s" podCreationTimestamp="2025-12-02 11:09:44 +0000 UTC" firstStartedPulling="2025-12-02 11:09:45.927545123 +0000 UTC m=+3710.122719425" lastFinishedPulling="2025-12-02 11:09:58.853807539 +0000 UTC m=+3723.048981841" observedRunningTime="2025-12-02 11:10:00.251539406 +0000 UTC m=+3724.446713708" watchObservedRunningTime="2025-12-02 11:10:00.251899186 +0000 UTC m=+3724.447073498" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.283502 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77cfc9896b-llw2g" podStartSLOduration=3.416427316 podStartE2EDuration="13.283480622s" podCreationTimestamp="2025-12-02 11:09:47 +0000 UTC" firstStartedPulling="2025-12-02 11:09:48.913894235 +0000 UTC m=+3713.109068537" lastFinishedPulling="2025-12-02 11:09:58.780947541 +0000 UTC m=+3722.976121843" observedRunningTime="2025-12-02 11:10:00.278411008 +0000 UTC m=+3724.473585330" watchObservedRunningTime="2025-12-02 11:10:00.283480622 +0000 UTC m=+3724.478654934" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.929128 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.929215 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.964026 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.974251 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.974299 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 11:10:00 crc kubenswrapper[4813]: I1202 11:10:00.981755 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 11:10:01 crc kubenswrapper[4813]: I1202 11:10:01.020490 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 11:10:01 crc kubenswrapper[4813]: I1202 
11:10:01.021832 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 11:10:01 crc kubenswrapper[4813]: I1202 11:10:01.205896 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 11:10:01 crc kubenswrapper[4813]: I1202 11:10:01.206263 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 11:10:01 crc kubenswrapper[4813]: I1202 11:10:01.206276 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 11:10:01 crc kubenswrapper[4813]: I1202 11:10:01.206285 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 11:10:03 crc kubenswrapper[4813]: I1202 11:10:03.219570 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 11:10:03 crc kubenswrapper[4813]: I1202 11:10:03.219870 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 11:10:03 crc kubenswrapper[4813]: I1202 11:10:03.642121 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 11:10:03 crc kubenswrapper[4813]: I1202 11:10:03.642220 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 11:10:03 crc kubenswrapper[4813]: I1202 11:10:03.651153 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 11:10:03 crc kubenswrapper[4813]: I1202 11:10:03.670640 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 11:10:03 crc kubenswrapper[4813]: I1202 11:10:03.673368 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 11:10:04 crc kubenswrapper[4813]: I1202 11:10:04.907727 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:10:05 crc kubenswrapper[4813]: I1202 11:10:05.129086 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:10:07 crc kubenswrapper[4813]: I1202 11:10:07.269140 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p5xnm" event={"ID":"17bfd912-d75b-48af-8433-1b5d24d50856","Type":"ContainerStarted","Data":"031eee95c4f032617bad83ea80f169d43b557160bf91e66593b999253cb701aa"} Dec 02 11:10:07 crc kubenswrapper[4813]: I1202 11:10:07.299797 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-p5xnm" podStartSLOduration=6.499276246 podStartE2EDuration="13.299772493s" podCreationTimestamp="2025-12-02 11:09:54 +0000 UTC" firstStartedPulling="2025-12-02 11:09:59.248225262 +0000 UTC m=+3723.443399564" lastFinishedPulling="2025-12-02 11:10:06.048721509 +0000 UTC m=+3730.243895811" observedRunningTime="2025-12-02 11:10:07.287459644 +0000 UTC m=+3731.482633946" watchObservedRunningTime="2025-12-02 11:10:07.299772493 +0000 UTC m=+3731.494946805" Dec 02 11:10:07 crc kubenswrapper[4813]: I1202 11:10:07.976450 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:10:07 crc kubenswrapper[4813]: I1202 11:10:07.977960 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:10:08 crc kubenswrapper[4813]: I1202 11:10:08.431198 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:10:08 crc kubenswrapper[4813]: I1202 11:10:08.436188 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:10:11 crc kubenswrapper[4813]: I1202 11:10:11.068819 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:10:11 crc kubenswrapper[4813]: E1202 11:10:11.069588 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:10:17 crc kubenswrapper[4813]: I1202 11:10:17.980453 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77cfc9896b-llw2g" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Dec 02 11:10:18 crc kubenswrapper[4813]: I1202 11:10:18.375903 4813 generic.go:334] "Generic (PLEG): container finished" podID="17bfd912-d75b-48af-8433-1b5d24d50856" containerID="031eee95c4f032617bad83ea80f169d43b557160bf91e66593b999253cb701aa" exitCode=0 Dec 02 11:10:18 crc kubenswrapper[4813]: I1202 11:10:18.375990 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p5xnm" event={"ID":"17bfd912-d75b-48af-8433-1b5d24d50856","Type":"ContainerDied","Data":"031eee95c4f032617bad83ea80f169d43b557160bf91e66593b999253cb701aa"} Dec 02 11:10:18 crc kubenswrapper[4813]: I1202 11:10:18.431330 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c5ddb87c8-5vtbk" podUID="757d290c-ab26-4557-a758-10924585a86b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.776136 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-p5xnm" Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.901572 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-combined-ca-bundle\") pod \"17bfd912-d75b-48af-8433-1b5d24d50856\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.901807 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx5ld\" (UniqueName: \"kubernetes.io/projected/17bfd912-d75b-48af-8433-1b5d24d50856-kube-api-access-qx5ld\") pod \"17bfd912-d75b-48af-8433-1b5d24d50856\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.901933 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-config-data\") pod \"17bfd912-d75b-48af-8433-1b5d24d50856\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.901962 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-job-config-data\") pod \"17bfd912-d75b-48af-8433-1b5d24d50856\" (UID: \"17bfd912-d75b-48af-8433-1b5d24d50856\") " Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.907744 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "17bfd912-d75b-48af-8433-1b5d24d50856" (UID: "17bfd912-d75b-48af-8433-1b5d24d50856"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.908296 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bfd912-d75b-48af-8433-1b5d24d50856-kube-api-access-qx5ld" (OuterVolumeSpecName: "kube-api-access-qx5ld") pod "17bfd912-d75b-48af-8433-1b5d24d50856" (UID: "17bfd912-d75b-48af-8433-1b5d24d50856"). InnerVolumeSpecName "kube-api-access-qx5ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.910691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-config-data" (OuterVolumeSpecName: "config-data") pod "17bfd912-d75b-48af-8433-1b5d24d50856" (UID: "17bfd912-d75b-48af-8433-1b5d24d50856"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:19 crc kubenswrapper[4813]: I1202 11:10:19.930719 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17bfd912-d75b-48af-8433-1b5d24d50856" (UID: "17bfd912-d75b-48af-8433-1b5d24d50856"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.003536 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx5ld\" (UniqueName: \"kubernetes.io/projected/17bfd912-d75b-48af-8433-1b5d24d50856-kube-api-access-qx5ld\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.003573 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.003583 4813 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-job-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.003592 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bfd912-d75b-48af-8433-1b5d24d50856-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.395809 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p5xnm" event={"ID":"17bfd912-d75b-48af-8433-1b5d24d50856","Type":"ContainerDied","Data":"51e22542af66771795f7f22706a7059b298959e9d55b5e040ca6754605d708e7"} Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.395858 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e22542af66771795f7f22706a7059b298959e9d55b5e040ca6754605d708e7" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.395903 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-p5xnm" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.756925 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:20 crc kubenswrapper[4813]: E1202 11:10:20.768644 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bfd912-d75b-48af-8433-1b5d24d50856" containerName="manila-db-sync" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.768682 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bfd912-d75b-48af-8433-1b5d24d50856" containerName="manila-db-sync" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.771129 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bfd912-d75b-48af-8433-1b5d24d50856" containerName="manila-db-sync" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.772237 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.772328 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.780278 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.780422 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-228b7" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.780559 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.780767 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.802787 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.804730 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.811302 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.823676 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924110 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924195 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924224 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-scripts\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924294 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924376 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-scripts\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924433 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924485 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924557 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sr9m\" (UniqueName: \"kubernetes.io/projected/a43cce0b-8a24-4057-afd7-951858e734aa-kube-api-access-9sr9m\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924634 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924693 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjdj\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-kube-api-access-qxjdj\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924728 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-ceph\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.924867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a43cce0b-8a24-4057-afd7-951858e734aa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.925247 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.980494 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-s24ls"] Dec 02 11:10:20 crc kubenswrapper[4813]: I1202 11:10:20.989365 4813 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.027798 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.027851 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.027880 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-scripts\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.027911 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.027959 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-scripts\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.027985 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.028026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.028055 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sr9m\" (UniqueName: \"kubernetes.io/projected/a43cce0b-8a24-4057-afd7-951858e734aa-kube-api-access-9sr9m\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.028114 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.028135 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qxjdj\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-kube-api-access-qxjdj\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.028163 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-ceph\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.028202 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.028225 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a43cce0b-8a24-4057-afd7-951858e734aa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.028267 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.030206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.034591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.034652 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a43cce0b-8a24-4057-afd7-951858e734aa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.043320 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-ceph\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.057140 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0" 
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.057399 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-scripts\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.057447 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-s24ls"]
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.058009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.060686 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.061083 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.071841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sr9m\" (UniqueName: \"kubernetes.io/projected/a43cce0b-8a24-4057-afd7-951858e734aa-kube-api-access-9sr9m\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.075897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-scripts\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.076482 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.079119 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data\") pod \"manila-scheduler-0\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " pod="openstack/manila-scheduler-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.079933 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjdj\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-kube-api-access-qxjdj\") pod \"manila-share-share1-0\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " pod="openstack/manila-share-share1-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.106830 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.119146 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.122181 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.135977 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.136395 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.161337 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.161548 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-config\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.161585 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.161684 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.161728 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8gh\" (UniqueName: \"kubernetes.io/projected/1518b626-2bab-4d9b-8572-f6fae0f49bea-kube-api-access-sx8gh\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.161769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.237526 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.302736 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b5ed9c-c961-4bfe-ae40-7587e42eca15-logs\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.302835 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.303010 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-scripts\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.303127 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.303377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-config\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.303416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.303876 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.303931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.303964 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx8gh\" (UniqueName: \"kubernetes.io/projected/1518b626-2bab-4d9b-8572-f6fae0f49bea-kube-api-access-sx8gh\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.303991 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls"
\"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.304108 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28b5ed9c-c961-4bfe-ae40-7587e42eca15-etc-machine-id\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.304145 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data-custom\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.304397 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.304429 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.304839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s486\" (UniqueName: \"kubernetes.io/projected/28b5ed9c-c961-4bfe-ae40-7587e42eca15-kube-api-access-8s486\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.325749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.326739 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.327158 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1518b626-2bab-4d9b-8572-f6fae0f49bea-config\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.355425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx8gh\" (UniqueName: \"kubernetes.io/projected/1518b626-2bab-4d9b-8572-f6fae0f49bea-kube-api-access-sx8gh\") pod \"dnsmasq-dns-76b5fdb995-s24ls\" (UID: \"1518b626-2bab-4d9b-8572-f6fae0f49bea\") " pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc 
kubenswrapper[4813]: I1202 11:10:21.428787 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data-custom\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.429992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s486\" (UniqueName: \"kubernetes.io/projected/28b5ed9c-c961-4bfe-ae40-7587e42eca15-kube-api-access-8s486\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.430206 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b5ed9c-c961-4bfe-ae40-7587e42eca15-logs\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.430311 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-scripts\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.430389 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.430923 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b5ed9c-c961-4bfe-ae40-7587e42eca15-logs\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.431058 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.431211 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28b5ed9c-c961-4bfe-ae40-7587e42eca15-etc-machine-id\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.431696 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28b5ed9c-c961-4bfe-ae40-7587e42eca15-etc-machine-id\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.447577 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data-custom\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.459948 
4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.477374 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s486\" (UniqueName: \"kubernetes.io/projected/28b5ed9c-c961-4bfe-ae40-7587e42eca15-kube-api-access-8s486\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.477809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.478022 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-scripts\") pod \"manila-api-0\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.635040 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.657627 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.808188 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:21 crc kubenswrapper[4813]: I1202 11:10:21.967022 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:22 crc kubenswrapper[4813]: I1202 11:10:22.375668 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-s24ls"] Dec 02 11:10:22 crc kubenswrapper[4813]: I1202 11:10:22.425359 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 02 11:10:22 crc kubenswrapper[4813]: I1202 11:10:22.436603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a43cce0b-8a24-4057-afd7-951858e734aa","Type":"ContainerStarted","Data":"6f8146d14471e5d256e6074bfa1540a1f572232b9485e511c83d1142c0c22abe"} Dec 02 11:10:22 crc kubenswrapper[4813]: I1202 11:10:22.439582 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d69799cd-9912-4861-b4a3-ea8214fc5530","Type":"ContainerStarted","Data":"fcdf7ff12870c286861d84edc05129e02b8d4b796642917d85c4caf0551ce08d"} Dec 02 11:10:22 crc kubenswrapper[4813]: W1202 11:10:22.476653 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1518b626_2bab_4d9b_8572_f6fae0f49bea.slice/crio-a4164c8c89f800b91b61f34b4e5be3ecfdd887584ccaf50fc4613dd7642d1274 WatchSource:0}: Error finding container a4164c8c89f800b91b61f34b4e5be3ecfdd887584ccaf50fc4613dd7642d1274: Status 404 returned error can't find the container with id a4164c8c89f800b91b61f34b4e5be3ecfdd887584ccaf50fc4613dd7642d1274 Dec 02 11:10:23 crc kubenswrapper[4813]: I1202 11:10:23.068923 4813 scope.go:117] "RemoveContainer" 
containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:10:23 crc kubenswrapper[4813]: E1202 11:10:23.069711 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:10:23 crc kubenswrapper[4813]: I1202 11:10:23.514280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a43cce0b-8a24-4057-afd7-951858e734aa","Type":"ContainerStarted","Data":"31e5c8aad5c6856346bef5f8304b055cc01865f764522fbf506afb8c55b90bda"} Dec 02 11:10:23 crc kubenswrapper[4813]: I1202 11:10:23.535554 4813 generic.go:334] "Generic (PLEG): container finished" podID="1518b626-2bab-4d9b-8572-f6fae0f49bea" containerID="3231afdd699267930ad899abb98e5be71f15728a1cc1fff9c8ca238a4b2acef4" exitCode=0 Dec 02 11:10:23 crc kubenswrapper[4813]: I1202 11:10:23.536593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" event={"ID":"1518b626-2bab-4d9b-8572-f6fae0f49bea","Type":"ContainerDied","Data":"3231afdd699267930ad899abb98e5be71f15728a1cc1fff9c8ca238a4b2acef4"} Dec 02 11:10:23 crc kubenswrapper[4813]: I1202 11:10:23.536639 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" event={"ID":"1518b626-2bab-4d9b-8572-f6fae0f49bea","Type":"ContainerStarted","Data":"a4164c8c89f800b91b61f34b4e5be3ecfdd887584ccaf50fc4613dd7642d1274"} Dec 02 11:10:23 crc kubenswrapper[4813]: I1202 11:10:23.552810 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"28b5ed9c-c961-4bfe-ae40-7587e42eca15","Type":"ContainerStarted","Data":"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558"} Dec 02 11:10:23 crc kubenswrapper[4813]: I1202 11:10:23.552861 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"28b5ed9c-c961-4bfe-ae40-7587e42eca15","Type":"ContainerStarted","Data":"f394feccd45fce02ba07eb8b9ec22853ee798230f2b244da81638dfb3f45db16"} Dec 02 11:10:23 crc kubenswrapper[4813]: I1202 11:10:23.923117 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.565866 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" event={"ID":"1518b626-2bab-4d9b-8572-f6fae0f49bea","Type":"ContainerStarted","Data":"8ee9ac086869b7c73ba0b76ea9b2a7d1554b480eadcb6430af1d88d90b2fd4d9"} Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.566231 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.569453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"28b5ed9c-c961-4bfe-ae40-7587e42eca15","Type":"ContainerStarted","Data":"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402"} Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.569561 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerName="manila-api-log" 
containerID="cri-o://42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558" gracePeriod=30 Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.569580 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerName="manila-api" containerID="cri-o://f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402" gracePeriod=30 Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.569566 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.580099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a43cce0b-8a24-4057-afd7-951858e734aa","Type":"ContainerStarted","Data":"829be3ea947d63a15a615353879f135a20246d538e211836a9a653b23cd55e9f"} Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.607404 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" podStartSLOduration=4.607385211 podStartE2EDuration="4.607385211s" podCreationTimestamp="2025-12-02 11:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:10:24.597549912 +0000 UTC m=+3748.792724214" watchObservedRunningTime="2025-12-02 11:10:24.607385211 +0000 UTC m=+3748.802559503" Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.639663 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.639641966 podStartE2EDuration="3.639641966s" podCreationTimestamp="2025-12-02 11:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:10:24.61476823 +0000 UTC m=+3748.809942532" watchObservedRunningTime="2025-12-02 11:10:24.639641966 +0000 UTC m=+3748.834816258" Dec 02 11:10:24 crc kubenswrapper[4813]: I1202 11:10:24.651956 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.875272653 podStartE2EDuration="4.651934625s" podCreationTimestamp="2025-12-02 11:10:20 +0000 UTC" firstStartedPulling="2025-12-02 11:10:21.818833242 +0000 UTC m=+3746.014007544" lastFinishedPulling="2025-12-02 11:10:22.595495214 +0000 UTC m=+3746.790669516" observedRunningTime="2025-12-02 11:10:24.642522368 +0000 UTC m=+3748.837696670" watchObservedRunningTime="2025-12-02 11:10:24.651934625 +0000 UTC m=+3748.847108927" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.231186 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.353523 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-combined-ca-bundle\") pod \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.353583 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b5ed9c-c961-4bfe-ae40-7587e42eca15-logs\") pod \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.353639 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-scripts\") pod \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.353669 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s486\" (UniqueName: \"kubernetes.io/projected/28b5ed9c-c961-4bfe-ae40-7587e42eca15-kube-api-access-8s486\") pod \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.353735 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28b5ed9c-c961-4bfe-ae40-7587e42eca15-etc-machine-id\") pod \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.353772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data-custom\") pod \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.353957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data\") pod \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\" (UID: \"28b5ed9c-c961-4bfe-ae40-7587e42eca15\") " Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.354188 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28b5ed9c-c961-4bfe-ae40-7587e42eca15-logs" (OuterVolumeSpecName: "logs") pod "28b5ed9c-c961-4bfe-ae40-7587e42eca15" (UID: "28b5ed9c-c961-4bfe-ae40-7587e42eca15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.354291 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28b5ed9c-c961-4bfe-ae40-7587e42eca15-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "28b5ed9c-c961-4bfe-ae40-7587e42eca15" (UID: "28b5ed9c-c961-4bfe-ae40-7587e42eca15"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.354955 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b5ed9c-c961-4bfe-ae40-7587e42eca15-logs\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.354979 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28b5ed9c-c961-4bfe-ae40-7587e42eca15-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.359924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-scripts" (OuterVolumeSpecName: "scripts") pod "28b5ed9c-c961-4bfe-ae40-7587e42eca15" (UID: "28b5ed9c-c961-4bfe-ae40-7587e42eca15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.360008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b5ed9c-c961-4bfe-ae40-7587e42eca15-kube-api-access-8s486" (OuterVolumeSpecName: "kube-api-access-8s486") pod "28b5ed9c-c961-4bfe-ae40-7587e42eca15" (UID: "28b5ed9c-c961-4bfe-ae40-7587e42eca15"). InnerVolumeSpecName "kube-api-access-8s486". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.377968 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "28b5ed9c-c961-4bfe-ae40-7587e42eca15" (UID: "28b5ed9c-c961-4bfe-ae40-7587e42eca15"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.443318 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28b5ed9c-c961-4bfe-ae40-7587e42eca15" (UID: "28b5ed9c-c961-4bfe-ae40-7587e42eca15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.454281 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data" (OuterVolumeSpecName: "config-data") pod "28b5ed9c-c961-4bfe-ae40-7587e42eca15" (UID: "28b5ed9c-c961-4bfe-ae40-7587e42eca15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.456432 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.456469 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.456480 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.456491 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b5ed9c-c961-4bfe-ae40-7587e42eca15-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.456501 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s486\" (UniqueName: \"kubernetes.io/projected/28b5ed9c-c961-4bfe-ae40-7587e42eca15-kube-api-access-8s486\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.591900 4813 generic.go:334] "Generic (PLEG): container finished" podID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerID="f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402" exitCode=0 Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.591938 4813 generic.go:334] "Generic (PLEG): container finished" podID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerID="42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558" exitCode=143 Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.592960 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.593248 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"28b5ed9c-c961-4bfe-ae40-7587e42eca15","Type":"ContainerDied","Data":"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402"} Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.593317 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"28b5ed9c-c961-4bfe-ae40-7587e42eca15","Type":"ContainerDied","Data":"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558"} Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.593328 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"28b5ed9c-c961-4bfe-ae40-7587e42eca15","Type":"ContainerDied","Data":"f394feccd45fce02ba07eb8b9ec22853ee798230f2b244da81638dfb3f45db16"} Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.593346 4813 scope.go:117] "RemoveContainer" containerID="f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.622161 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.625269 4813 scope.go:117] "RemoveContainer" containerID="42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.633317 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.658118 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Dec 02 11:10:25 crc kubenswrapper[4813]: E1202 11:10:25.658522 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerName="manila-api" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.658539 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerName="manila-api" Dec 02 11:10:25 crc kubenswrapper[4813]: E1202 11:10:25.658568 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerName="manila-api-log" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.658575 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerName="manila-api-log" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.658785 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerName="manila-api-log" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.658802 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" containerName="manila-api" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.659833 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.664355 4813 scope.go:117] "RemoveContainer" containerID="f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.664717 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Dec 02 11:10:25 crc kubenswrapper[4813]: E1202 11:10:25.665203 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402\": container with ID starting with f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402 not found: ID does not exist" containerID="f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.665374 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402"} err="failed to get container status \"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402\": rpc error: code = NotFound desc = could not find container \"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402\": container with ID starting with f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402 not found: ID does not exist" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.666939 4813 scope.go:117] "RemoveContainer" containerID="42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.666287 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.667390 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Dec 02 11:10:25 crc kubenswrapper[4813]: E1202 11:10:25.667513 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558\": container with ID starting with 42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558 not found: ID does not exist" containerID="42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.667553 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558"} err="failed to get container status \"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558\": rpc error: code = NotFound desc = could not find container \"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558\": container with ID starting with 42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558 not found: ID does not exist" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.667585 4813 scope.go:117] "RemoveContainer" containerID="f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.667907 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402"} err="failed to get container status \"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402\": rpc 
error: code = NotFound desc = could not find container \"f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402\": container with ID starting with f41e7ee73bc958623512f77de3f226978f09d196d01c2f7d9474a02ae4ab9402 not found: ID does not exist" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.668096 4813 scope.go:117] "RemoveContainer" containerID="42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.668421 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558"} err="failed to get container status \"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558\": rpc error: code = NotFound desc = could not find container \"42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558\": container with ID starting with 42ccf4cbca6848b03caf4a2a27a11fa58710fc5b3374f774b45e1bdeaef44558 not found: ID does not exist" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.673521 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw5ws\" (UniqueName: \"kubernetes.io/projected/5aee8529-5e7c-4f43-b683-ada4d72cebe4-kube-api-access-vw5ws\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762588 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-config-data\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762632 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-internal-tls-certs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762678 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aee8529-5e7c-4f43-b683-ada4d72cebe4-logs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762712 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-public-tls-certs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5aee8529-5e7c-4f43-b683-ada4d72cebe4-etc-machine-id\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762827 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-config-data-custom\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762861 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-scripts\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.762877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.866645 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-internal-tls-certs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.866701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aee8529-5e7c-4f43-b683-ada4d72cebe4-logs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.866728 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-public-tls-certs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.866795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5aee8529-5e7c-4f43-b683-ada4d72cebe4-etc-machine-id\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.866815 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-config-data-custom\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.866994 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-scripts\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.867010 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 
11:10:25.867034 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw5ws\" (UniqueName: \"kubernetes.io/projected/5aee8529-5e7c-4f43-b683-ada4d72cebe4-kube-api-access-vw5ws\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.867374 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aee8529-5e7c-4f43-b683-ada4d72cebe4-logs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.866860 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5aee8529-5e7c-4f43-b683-ada4d72cebe4-etc-machine-id\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.867935 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-config-data\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.870499 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-scripts\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.871121 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-config-data-custom\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.872054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.872608 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-config-data\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.879643 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-internal-tls-certs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.880196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aee8529-5e7c-4f43-b683-ada4d72cebe4-public-tls-certs\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.886194 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vw5ws\" (UniqueName: \"kubernetes.io/projected/5aee8529-5e7c-4f43-b683-ada4d72cebe4-kube-api-access-vw5ws\") pod \"manila-api-0\" (UID: \"5aee8529-5e7c-4f43-b683-ada4d72cebe4\") " pod="openstack/manila-api-0" Dec 02 11:10:25 crc kubenswrapper[4813]: I1202 11:10:25.987962 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Dec 02 11:10:26 crc kubenswrapper[4813]: I1202 11:10:26.082696 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b5ed9c-c961-4bfe-ae40-7587e42eca15" path="/var/lib/kubelet/pods/28b5ed9c-c961-4bfe-ae40-7587e42eca15/volumes" Dec 02 11:10:27 crc kubenswrapper[4813]: I1202 11:10:27.317563 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.104842 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.107430 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.146122 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.204676 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.252848 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-logs\") pod \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.252913 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-config-data\") pod \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.253124 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk9wd\" (UniqueName: \"kubernetes.io/projected/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-kube-api-access-wk9wd\") pod \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.253269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-scripts\") pod \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.253284 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-logs" (OuterVolumeSpecName: "logs") pod "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" (UID: "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.253340 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-horizon-secret-key\") pod \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\" (UID: \"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.254677 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-logs\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.258019 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-kube-api-access-wk9wd" (OuterVolumeSpecName: "kube-api-access-wk9wd") pod "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" (UID: "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936"). InnerVolumeSpecName "kube-api-access-wk9wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.258452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" (UID: "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.298461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-scripts" (OuterVolumeSpecName: "scripts") pod "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" (UID: "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.311011 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-config-data" (OuterVolumeSpecName: "config-data") pod "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" (UID: "aeda5ec3-5da8-4aca-9ecf-c8cc4c352936"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.356246 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk9wd\" (UniqueName: \"kubernetes.io/projected/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-kube-api-access-wk9wd\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.356648 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.356662 4813 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.356673 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.410455 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.458229 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-logs\") pod \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.458305 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcgfb\" (UniqueName: \"kubernetes.io/projected/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-kube-api-access-bcgfb\") pod \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.458343 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-scripts\") pod \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.458363 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-config-data\") pod \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.458475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-horizon-secret-key\") pod \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\" (UID: \"d521e7fa-e5b4-4fd3-8882-27af5fb803b3\") " Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.459420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-logs" (OuterVolumeSpecName: "logs") pod "d521e7fa-e5b4-4fd3-8882-27af5fb803b3" (UID: "d521e7fa-e5b4-4fd3-8882-27af5fb803b3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.462567 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-kube-api-access-bcgfb" (OuterVolumeSpecName: "kube-api-access-bcgfb") pod "d521e7fa-e5b4-4fd3-8882-27af5fb803b3" (UID: "d521e7fa-e5b4-4fd3-8882-27af5fb803b3"). InnerVolumeSpecName "kube-api-access-bcgfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.463952 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d521e7fa-e5b4-4fd3-8882-27af5fb803b3" (UID: "d521e7fa-e5b4-4fd3-8882-27af5fb803b3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.489430 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-config-data" (OuterVolumeSpecName: "config-data") pod "d521e7fa-e5b4-4fd3-8882-27af5fb803b3" (UID: "d521e7fa-e5b4-4fd3-8882-27af5fb803b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.501255 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-scripts" (OuterVolumeSpecName: "scripts") pod "d521e7fa-e5b4-4fd3-8882-27af5fb803b3" (UID: "d521e7fa-e5b4-4fd3-8882-27af5fb803b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.561504 4813 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.561534 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-logs\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.561556 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcgfb\" (UniqueName: \"kubernetes.io/projected/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-kube-api-access-bcgfb\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.561567 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.561577 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d521e7fa-e5b4-4fd3-8882-27af5fb803b3-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.637316 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-s24ls" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.680962 4813 generic.go:334] "Generic (PLEG): container finished" podID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" 
containerID="208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7" exitCode=137 Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.681214 4813 generic.go:334] "Generic (PLEG): container finished" podID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerID="5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00" exitCode=137 Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.688360 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fc4d895dc-zfl87" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.688436 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fc4d895dc-zfl87" event={"ID":"d521e7fa-e5b4-4fd3-8882-27af5fb803b3","Type":"ContainerDied","Data":"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.688514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fc4d895dc-zfl87" event={"ID":"d521e7fa-e5b4-4fd3-8882-27af5fb803b3","Type":"ContainerDied","Data":"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.688533 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fc4d895dc-zfl87" event={"ID":"d521e7fa-e5b4-4fd3-8882-27af5fb803b3","Type":"ContainerDied","Data":"d2d4b329d104bdf831ad97c69dee5f7f54c5e28990b243b545ce75817eb8cac4"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.688564 4813 scope.go:117] "RemoveContainer" containerID="208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.711807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d69799cd-9912-4861-b4a3-ea8214fc5530","Type":"ContainerStarted","Data":"46cdf49a3c7f05a906560615fedc9c290cb06bc2db556f7c229180164a28fec1"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.718363 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5aee8529-5e7c-4f43-b683-ada4d72cebe4","Type":"ContainerStarted","Data":"c3a01597aef753879a950b5a5d060172a1075b09a25694065c14104db92489d5"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.718406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5aee8529-5e7c-4f43-b683-ada4d72cebe4","Type":"ContainerStarted","Data":"bb1947f1f17c1b2d6fa1f2a259c092825612bcf66515abd7d5478979d07314f2"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.721771 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d89848547-h6sc5" event={"ID":"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936","Type":"ContainerDied","Data":"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.721851 4813 generic.go:334] "Generic (PLEG): container finished" podID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerID="b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a" exitCode=137 Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.721917 4813 generic.go:334] "Generic (PLEG): container finished" podID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerID="c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4" exitCode=137 Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.721923 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d89848547-h6sc5" Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.721950 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d89848547-h6sc5" event={"ID":"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936","Type":"ContainerDied","Data":"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.721998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d89848547-h6sc5" event={"ID":"aeda5ec3-5da8-4aca-9ecf-c8cc4c352936","Type":"ContainerDied","Data":"6561edfebff343eb0a20b98307f0db66b984c0bae90fedd26e7317d89a7608a1"} Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.748053 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-fs65h"] Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.748527 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" podUID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerName="dnsmasq-dns" containerID="cri-o://131a5e3b3a7aa79aac0552e395e734a073fa34e3f6df6b152c22f73691f91d8d" gracePeriod=10 Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.789520 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fc4d895dc-zfl87"] Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.809605 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fc4d895dc-zfl87"] Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.825772 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d89848547-h6sc5"] Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.835570 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d89848547-h6sc5"] Dec 02 11:10:31 crc kubenswrapper[4813]: I1202 11:10:31.993573 4813 scope.go:117] "RemoveContainer" containerID="5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.084904 4813 scope.go:117] "RemoveContainer" containerID="208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7" Dec 02 11:10:32 crc kubenswrapper[4813]: E1202 11:10:32.085281 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7\": container with ID starting with 208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7 not found: ID does not exist" containerID="208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.085313 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7"} err="failed to get container status \"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7\": rpc error: code = NotFound desc = could not find container \"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7\": container with ID starting with 208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7 not found: ID does not exist" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.085331 4813 scope.go:117] "RemoveContainer" containerID="5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00" Dec 02 11:10:32 crc kubenswrapper[4813]: E1202 11:10:32.085522 4813 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00\": container with ID starting with 5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00 not found: ID does not exist" containerID="5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.085545 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00"} err="failed to get container status \"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00\": rpc error: code = NotFound desc = could not find container \"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00\": container with ID starting with 5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00 not found: ID does not exist" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.085558 4813 scope.go:117] "RemoveContainer" containerID="208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.087095 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7"} err="failed to get container status \"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7\": rpc error: code = NotFound desc = could not find container \"208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7\": container with ID starting with 208198b76a10e49aee551b2e6874ccab99368d37dc7a1448b891905eb2d428c7 not found: ID does not exist" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.087120 4813 scope.go:117] "RemoveContainer" containerID="5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.089342 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" path="/var/lib/kubelet/pods/aeda5ec3-5da8-4aca-9ecf-c8cc4c352936/volumes" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.090191 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" path="/var/lib/kubelet/pods/d521e7fa-e5b4-4fd3-8882-27af5fb803b3/volumes" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.092915 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00"} err="failed to get container status \"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00\": rpc error: code = NotFound desc = could not find container \"5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00\": container with ID starting with 5c9ac2c272901c63ed1f6542d41d09e601bea7b71e4d419140c93789de9a9a00 not found: ID does not exist" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.092968 4813 scope.go:117] "RemoveContainer" containerID="b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.320265 4813 scope.go:117] "RemoveContainer" containerID="c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.391416 4813 scope.go:117] "RemoveContainer" containerID="b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a" Dec 02 11:10:32 crc 
kubenswrapper[4813]: E1202 11:10:32.397307 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a\": container with ID starting with b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a not found: ID does not exist" containerID="b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.397368 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a"} err="failed to get container status \"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a\": rpc error: code = NotFound desc = could not find container \"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a\": container with ID starting with b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a not found: ID does not exist" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.397399 4813 scope.go:117] "RemoveContainer" containerID="c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4" Dec 02 11:10:32 crc kubenswrapper[4813]: E1202 11:10:32.397917 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4\": container with ID starting with c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4 not found: ID does not exist" containerID="c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.397958 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4"} err="failed to get container status \"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4\": rpc error: code = NotFound desc = could not find container \"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4\": container with ID starting with c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4 not found: ID does not exist" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.397990 4813 scope.go:117] "RemoveContainer" containerID="b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.398222 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a"} err="failed to get container status \"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a\": rpc error: code = NotFound desc = could not find container \"b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a\": container with ID starting with b4efefe24bc252d31078185617cbf29686fb3cdddcc60fcb7dc82ed9b9763b0a not found: ID does not exist" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.398237 4813 scope.go:117] "RemoveContainer" containerID="c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.398389 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4"} err="failed to get container status 
\"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4\": rpc error: code = NotFound desc = could not find container \"c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4\": container with ID starting with c5bca779990439b60b1fa450a92fa6b4d5e02733785fde13e644380377484ad4 not found: ID does not exist" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.734500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"5aee8529-5e7c-4f43-b683-ada4d72cebe4","Type":"ContainerStarted","Data":"bd9175f5345aecea690c7974038c6ef4b54760e53af455d1d2477842c6a85ec6"} Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.736146 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.741121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" event={"ID":"888bdb9c-7436-47d1-b240-71ceb68bd6f1","Type":"ContainerDied","Data":"131a5e3b3a7aa79aac0552e395e734a073fa34e3f6df6b152c22f73691f91d8d"} Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.741146 4813 generic.go:334] "Generic (PLEG): container finished" podID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerID="131a5e3b3a7aa79aac0552e395e734a073fa34e3f6df6b152c22f73691f91d8d" exitCode=0 Dec 02 11:10:32 crc kubenswrapper[4813]: I1202 11:10:32.769322 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=7.769297039 podStartE2EDuration="7.769297039s" podCreationTimestamp="2025-12-02 11:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:10:32.754973043 +0000 UTC m=+3756.950147365" watchObservedRunningTime="2025-12-02 11:10:32.769297039 +0000 UTC m=+3756.964471341" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.003787 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.134675 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-sb\") pod \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.134719 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-dns-svc\") pod \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.134757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-config\") pod \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.134823 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvczh\" (UniqueName: \"kubernetes.io/projected/888bdb9c-7436-47d1-b240-71ceb68bd6f1-kube-api-access-mvczh\") pod \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.134966 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-nb\") pod \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.135037 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-openstack-edpm-ipam\") pod \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.147338 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888bdb9c-7436-47d1-b240-71ceb68bd6f1-kube-api-access-mvczh" (OuterVolumeSpecName: "kube-api-access-mvczh") pod "888bdb9c-7436-47d1-b240-71ceb68bd6f1" (UID: "888bdb9c-7436-47d1-b240-71ceb68bd6f1"). InnerVolumeSpecName "kube-api-access-mvczh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.212261 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "888bdb9c-7436-47d1-b240-71ceb68bd6f1" (UID: "888bdb9c-7436-47d1-b240-71ceb68bd6f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.228502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "888bdb9c-7436-47d1-b240-71ceb68bd6f1" (UID: "888bdb9c-7436-47d1-b240-71ceb68bd6f1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.236960 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-config" (OuterVolumeSpecName: "config") pod "888bdb9c-7436-47d1-b240-71ceb68bd6f1" (UID: "888bdb9c-7436-47d1-b240-71ceb68bd6f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.237285 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-config\") pod \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\" (UID: \"888bdb9c-7436-47d1-b240-71ceb68bd6f1\") " Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.237723 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvczh\" (UniqueName: \"kubernetes.io/projected/888bdb9c-7436-47d1-b240-71ceb68bd6f1-kube-api-access-mvczh\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.237740 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.237748 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:33 crc kubenswrapper[4813]: W1202 11:10:33.237790 4813 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/888bdb9c-7436-47d1-b240-71ceb68bd6f1/volumes/kubernetes.io~configmap/config Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.237809 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-config" (OuterVolumeSpecName: "config") pod "888bdb9c-7436-47d1-b240-71ceb68bd6f1" (UID: "888bdb9c-7436-47d1-b240-71ceb68bd6f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.239491 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "888bdb9c-7436-47d1-b240-71ceb68bd6f1" (UID: "888bdb9c-7436-47d1-b240-71ceb68bd6f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.253949 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c5ddb87c8-5vtbk" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.257651 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "888bdb9c-7436-47d1-b240-71ceb68bd6f1" (UID: "888bdb9c-7436-47d1-b240-71ceb68bd6f1"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.331829 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77cfc9896b-llw2g"] Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.332156 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77cfc9896b-llw2g" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" containerID="cri-o://ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73" gracePeriod=30 Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.338064 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77cfc9896b-llw2g" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon-log" containerID="cri-o://9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3" gracePeriod=30 Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.339199 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.339234 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-config\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.339245 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888bdb9c-7436-47d1-b240-71ceb68bd6f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.384165 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77cfc9896b-llw2g" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.755735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d69799cd-9912-4861-b4a3-ea8214fc5530","Type":"ContainerStarted","Data":"c0b84fac87801d50b1748d6a1c4ff52a92bbb28f836409161a74538750acc7db"} Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.759128 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.761531 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" event={"ID":"888bdb9c-7436-47d1-b240-71ceb68bd6f1","Type":"ContainerDied","Data":"20c8b5df90118f876bdbfcb32add85da17f51e3a6c43bd52df1a8b43d532c0f4"} Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.761607 4813 scope.go:117] "RemoveContainer" containerID="131a5e3b3a7aa79aac0552e395e734a073fa34e3f6df6b152c22f73691f91d8d" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.785200 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.894140811 podStartE2EDuration="13.785179656s" podCreationTimestamp="2025-12-02 11:10:20 +0000 UTC" firstStartedPulling="2025-12-02 11:10:21.971359941 +0000 UTC m=+3746.166534243" lastFinishedPulling="2025-12-02 11:10:30.862398786 +0000 UTC m=+3755.057573088" observedRunningTime="2025-12-02 11:10:33.779666019 +0000 UTC m=+3757.974840321" watchObservedRunningTime="2025-12-02 11:10:33.785179656 +0000 UTC m=+3757.980353958" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.814367 4813 scope.go:117] "RemoveContainer" containerID="1caaae268d9f24e4e4bc653fa4240222be3620b33e9e407537337c67fe92df01" Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.821575 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-fs65h"] Dec 02 11:10:33 crc kubenswrapper[4813]: I1202 11:10:33.836465 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-fs65h"] Dec 02 11:10:34 crc kubenswrapper[4813]: I1202 11:10:34.078526 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" path="/var/lib/kubelet/pods/888bdb9c-7436-47d1-b240-71ceb68bd6f1/volumes" Dec 02 11:10:34 crc kubenswrapper[4813]: I1202 11:10:34.708104 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 11:10:34 crc kubenswrapper[4813]: I1202 11:10:34.708467 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="ceilometer-central-agent" containerID="cri-o://ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a" gracePeriod=30 Dec 02 11:10:34 crc kubenswrapper[4813]: I1202 11:10:34.708614 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="proxy-httpd" containerID="cri-o://b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538" gracePeriod=30 Dec 02 11:10:34 crc kubenswrapper[4813]: I1202 11:10:34.708662 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="sg-core" containerID="cri-o://cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7" gracePeriod=30 Dec 02 11:10:34 crc kubenswrapper[4813]: I1202 11:10:34.708703 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="ceilometer-notification-agent" containerID="cri-o://2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c" gracePeriod=30 Dec 02 11:10:35 crc kubenswrapper[4813]: I1202 11:10:35.783452 4813 
generic.go:334] "Generic (PLEG): container finished" podID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerID="b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538" exitCode=0 Dec 02 11:10:35 crc kubenswrapper[4813]: I1202 11:10:35.783968 4813 generic.go:334] "Generic (PLEG): container finished" podID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerID="cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7" exitCode=2 Dec 02 11:10:35 crc kubenswrapper[4813]: I1202 11:10:35.783978 4813 generic.go:334] "Generic (PLEG): container finished" podID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerID="ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a" exitCode=0 Dec 02 11:10:35 crc kubenswrapper[4813]: I1202 11:10:35.783530 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerDied","Data":"b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538"} Dec 02 11:10:35 crc kubenswrapper[4813]: I1202 11:10:35.784034 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerDied","Data":"cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7"} Dec 02 11:10:35 crc kubenswrapper[4813]: I1202 11:10:35.784055 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerDied","Data":"ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a"} Dec 02 11:10:36 crc kubenswrapper[4813]: I1202 11:10:36.075557 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:10:36 crc kubenswrapper[4813]: E1202 11:10:36.075830 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:10:36 crc kubenswrapper[4813]: I1202 11:10:36.735583 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77cfc9896b-llw2g" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:49798->10.217.0.246:8443: read: connection reset by peer" Dec 02 11:10:36 crc kubenswrapper[4813]: I1202 11:10:36.795109 4813 generic.go:334] "Generic (PLEG): container finished" podID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerID="ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73" exitCode=0 Dec 02 11:10:36 crc kubenswrapper[4813]: I1202 11:10:36.795146 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77cfc9896b-llw2g" event={"ID":"783caf3f-632f-4ee5-9ace-b9337879d5c0","Type":"ContainerDied","Data":"ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73"} Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.488757 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.620563 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-run-httpd\") pod \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.620632 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj78r\" (UniqueName: \"kubernetes.io/projected/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-kube-api-access-nj78r\") pod \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.620716 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-scripts\") pod \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.620784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-sg-core-conf-yaml\") pod \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.620855 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-log-httpd\") pod \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.620919 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-config-data\") pod \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.620987 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-ceilometer-tls-certs\") pod \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.621017 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-combined-ca-bundle\") pod \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\" (UID: \"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05\") " Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.620847 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" (UID: "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.622243 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" (UID: "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.622887 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-864d5fc68c-fs65h" podUID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.196:5353: i/o timeout" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.627458 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-kube-api-access-nj78r" (OuterVolumeSpecName: "kube-api-access-nj78r") pod "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" (UID: "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05"). InnerVolumeSpecName "kube-api-access-nj78r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.639877 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-scripts" (OuterVolumeSpecName: "scripts") pod "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" (UID: "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.653684 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" (UID: "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.670924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" (UID: "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.720391 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" (UID: "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.723691 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.723721 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.723734 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.723743 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.723751 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj78r\" (UniqueName: \"kubernetes.io/projected/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-kube-api-access-nj78r\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.723760 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.723767 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.735236 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-config-data" (OuterVolumeSpecName: "config-data") pod "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" (UID: "9fd36d3a-869b-4bc9-95e3-cabbc3c55d05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.806683 4813 generic.go:334] "Generic (PLEG): container finished" podID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerID="2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c" exitCode=0 Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.806724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerDied","Data":"2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c"} Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.806751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9fd36d3a-869b-4bc9-95e3-cabbc3c55d05","Type":"ContainerDied","Data":"965a4c7de37836743455ec967128991a4350894c9f9564d3a4b7ed0a96c557ed"} Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.806768 4813 scope.go:117] "RemoveContainer" containerID="b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.806890 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.826004 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.847896 4813 scope.go:117] "RemoveContainer" containerID="cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.848695 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.867015 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.883157 4813 scope.go:117] "RemoveContainer" containerID="2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.885515 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886149 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="sg-core" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886167 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="sg-core" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886179 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerName="horizon-log" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886185 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerName="horizon-log" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886203 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerName="horizon-log" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886228 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerName="horizon-log" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886242 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="ceilometer-notification-agent" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886248 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="ceilometer-notification-agent" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886255 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerName="dnsmasq-dns" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886261 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerName="dnsmasq-dns" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886279 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="ceilometer-central-agent" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886286 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="ceilometer-central-agent" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 
11:10:37.886299 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerName="horizon" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886305 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerName="horizon" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886318 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="proxy-httpd" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886324 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="proxy-httpd" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886334 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerName="horizon" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886339 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerName="horizon" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.886348 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerName="init" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886355 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerName="init" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886526 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerName="horizon" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886538 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d521e7fa-e5b4-4fd3-8882-27af5fb803b3" containerName="horizon-log" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886545 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerName="horizon" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886555 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="proxy-httpd" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886563 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="ceilometer-notification-agent" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886572 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeda5ec3-5da8-4aca-9ecf-c8cc4c352936" containerName="horizon-log" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886585 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="888bdb9c-7436-47d1-b240-71ceb68bd6f1" containerName="dnsmasq-dns" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886592 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="ceilometer-central-agent" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.886604 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" containerName="sg-core" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.888742 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.897188 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.897445 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.897665 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.900060 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.925813 4813 scope.go:117] "RemoveContainer" containerID="ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.949715 4813 scope.go:117] "RemoveContainer" containerID="b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.950184 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538\": container with ID starting with b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538 not found: ID does not exist" containerID="b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.950281 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538"} err="failed to get container status \"b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538\": rpc error: code = NotFound desc = could not find container \"b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538\": container with ID starting with b257f59dd95a37000c851f7704c0a87db2a3c76f2e159d23b67676cb35b51538 not found: ID does not exist" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.950363 4813 scope.go:117] "RemoveContainer" containerID="cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.950650 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7\": container with ID starting with cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7 not found: ID does not exist" containerID="cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.950727 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7"} err="failed to get container status \"cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7\": rpc error: code = NotFound desc = could not find container \"cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7\": container with ID starting with cf4dccc0b9e1392773e279c6bd0a16fc0052def76999278dce8a05719ec7c3b7 not found: ID does not exist" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.950790 4813 scope.go:117] "RemoveContainer" containerID="2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c" Dec 02 11:10:37 
crc kubenswrapper[4813]: E1202 11:10:37.951280 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c\": container with ID starting with 2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c not found: ID does not exist" containerID="2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.951396 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c"} err="failed to get container status \"2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c\": rpc error: code = NotFound desc = could not find container \"2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c\": container with ID starting with 2e06bac778bc9a168ad8bcb40feb96d5f519bd01204c70d2f6429bb839fc429c not found: ID does not exist" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.951498 4813 scope.go:117] "RemoveContainer" containerID="ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a" Dec 02 11:10:37 crc kubenswrapper[4813]: E1202 11:10:37.951892 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a\": container with ID starting with ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a not found: ID does not exist" containerID="ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.951990 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a"} err="failed to get container status \"ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a\": rpc error: code = NotFound desc = could not find container \"ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a\": container with ID starting with ce3fcaf9fb4fc16e58920b743996897059c3317c36d15244485308cb18cefa6a not found: ID does not exist" Dec 02 11:10:37 crc kubenswrapper[4813]: I1202 11:10:37.977951 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77cfc9896b-llw2g" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.029504 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd503396-3ca3-46ca-850c-51717dc92ba4-log-httpd\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.029552 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.029638 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.029673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-scripts\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.029708 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbw5p\" (UniqueName: \"kubernetes.io/projected/dd503396-3ca3-46ca-850c-51717dc92ba4-kube-api-access-tbw5p\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.029774 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd503396-3ca3-46ca-850c-51717dc92ba4-run-httpd\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.029820 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-config-data\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.029896 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.080939 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd36d3a-869b-4bc9-95e3-cabbc3c55d05" path="/var/lib/kubelet/pods/9fd36d3a-869b-4bc9-95e3-cabbc3c55d05/volumes" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.131047 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbw5p\" (UniqueName: \"kubernetes.io/projected/dd503396-3ca3-46ca-850c-51717dc92ba4-kube-api-access-tbw5p\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.131142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd503396-3ca3-46ca-850c-51717dc92ba4-run-httpd\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.131189 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-config-data\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.131245 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.131284 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd503396-3ca3-46ca-850c-51717dc92ba4-log-httpd\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.131303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.131339 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.131360 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-scripts\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.132357 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd503396-3ca3-46ca-850c-51717dc92ba4-run-httpd\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.132756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd503396-3ca3-46ca-850c-51717dc92ba4-log-httpd\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.136788 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-config-data\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.136979 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.137196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.138814 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-scripts\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.148033 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd503396-3ca3-46ca-850c-51717dc92ba4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.149012 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbw5p\" (UniqueName: \"kubernetes.io/projected/dd503396-3ca3-46ca-850c-51717dc92ba4-kube-api-access-tbw5p\") pod \"ceilometer-0\" (UID: \"dd503396-3ca3-46ca-850c-51717dc92ba4\") " pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.206216 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.672226 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 11:10:38 crc kubenswrapper[4813]: I1202 11:10:38.816971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd503396-3ca3-46ca-850c-51717dc92ba4","Type":"ContainerStarted","Data":"31e6f49607d71c91ec7bf0c3e90333434917995415a9b562c85e87f5cc5d01fe"} Dec 02 11:10:39 crc kubenswrapper[4813]: I1202 11:10:39.836522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd503396-3ca3-46ca-850c-51717dc92ba4","Type":"ContainerStarted","Data":"ecdd4804d5bcc1a84d2723ffb0afb8c70f55ee3d0f21ba4de2a6f8e2cf1713d3"} Dec 02 11:10:40 crc kubenswrapper[4813]: I1202 11:10:40.848565 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd503396-3ca3-46ca-850c-51717dc92ba4","Type":"ContainerStarted","Data":"9dee665ad95d1967ccb7efda35039428d0fec2b81abe8d67042ce709ba4d5e6d"} Dec 02 11:10:41 crc kubenswrapper[4813]: I1202 11:10:41.142259 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 02 11:10:41 crc kubenswrapper[4813]: I1202 11:10:41.872254 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd503396-3ca3-46ca-850c-51717dc92ba4","Type":"ContainerStarted","Data":"d0094ea3df7aa22729e502bd5bb577b980b4a256fd6d7faee154093290986bfd"} Dec 02 11:10:42 crc kubenswrapper[4813]: I1202 11:10:42.602063 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 02 11:10:42 crc kubenswrapper[4813]: I1202 11:10:42.669952 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:42 crc kubenswrapper[4813]: I1202 11:10:42.689265 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 02 11:10:42 crc kubenswrapper[4813]: I1202 11:10:42.754159 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:42 crc kubenswrapper[4813]: I1202 11:10:42.879832 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" containerName="manila-scheduler" 
containerID="cri-o://31e5c8aad5c6856346bef5f8304b055cc01865f764522fbf506afb8c55b90bda" gracePeriod=30 Dec 02 11:10:42 crc kubenswrapper[4813]: I1202 11:10:42.879973 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" containerName="probe" containerID="cri-o://829be3ea947d63a15a615353879f135a20246d538e211836a9a653b23cd55e9f" gracePeriod=30 Dec 02 11:10:42 crc kubenswrapper[4813]: I1202 11:10:42.880530 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerName="probe" containerID="cri-o://c0b84fac87801d50b1748d6a1c4ff52a92bbb28f836409161a74538750acc7db" gracePeriod=30 Dec 02 11:10:42 crc kubenswrapper[4813]: I1202 11:10:42.880520 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerName="manila-share" containerID="cri-o://46cdf49a3c7f05a906560615fedc9c290cb06bc2db556f7c229180164a28fec1" gracePeriod=30 Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.891804 4813 generic.go:334] "Generic (PLEG): container finished" podID="a43cce0b-8a24-4057-afd7-951858e734aa" containerID="829be3ea947d63a15a615353879f135a20246d538e211836a9a653b23cd55e9f" exitCode=0 Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.891855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a43cce0b-8a24-4057-afd7-951858e734aa","Type":"ContainerDied","Data":"829be3ea947d63a15a615353879f135a20246d538e211836a9a653b23cd55e9f"} Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.899714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd503396-3ca3-46ca-850c-51717dc92ba4","Type":"ContainerStarted","Data":"d7fe5d0b01be69ad60ca7631b58c678348c9bb539632781a9b30de67c123b26a"} Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.900248 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.902869 4813 generic.go:334] "Generic (PLEG): container finished" podID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerID="c0b84fac87801d50b1748d6a1c4ff52a92bbb28f836409161a74538750acc7db" exitCode=0 Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.902970 4813 generic.go:334] "Generic (PLEG): container finished" podID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerID="46cdf49a3c7f05a906560615fedc9c290cb06bc2db556f7c229180164a28fec1" exitCode=1 Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.903043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d69799cd-9912-4861-b4a3-ea8214fc5530","Type":"ContainerDied","Data":"c0b84fac87801d50b1748d6a1c4ff52a92bbb28f836409161a74538750acc7db"} Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.903142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d69799cd-9912-4861-b4a3-ea8214fc5530","Type":"ContainerDied","Data":"46cdf49a3c7f05a906560615fedc9c290cb06bc2db556f7c229180164a28fec1"} Dec 02 11:10:43 crc kubenswrapper[4813]: I1202 11:10:43.926550 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.038662883 podStartE2EDuration="6.926528096s" podCreationTimestamp="2025-12-02 
11:10:37 +0000 UTC" firstStartedPulling="2025-12-02 11:10:38.686866711 +0000 UTC m=+3762.882041003" lastFinishedPulling="2025-12-02 11:10:43.574731914 +0000 UTC m=+3767.769906216" observedRunningTime="2025-12-02 11:10:43.920203087 +0000 UTC m=+3768.115377459" watchObservedRunningTime="2025-12-02 11:10:43.926528096 +0000 UTC m=+3768.121702408" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.387851 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-var-lib-manila\") pod \"d69799cd-9912-4861-b4a3-ea8214fc5530\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479281 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "d69799cd-9912-4861-b4a3-ea8214fc5530" (UID: "d69799cd-9912-4861-b4a3-ea8214fc5530"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479551 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data\") pod \"d69799cd-9912-4861-b4a3-ea8214fc5530\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479602 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-combined-ca-bundle\") pod \"d69799cd-9912-4861-b4a3-ea8214fc5530\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-scripts\") pod \"d69799cd-9912-4861-b4a3-ea8214fc5530\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-ceph\") pod \"d69799cd-9912-4861-b4a3-ea8214fc5530\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxjdj\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-kube-api-access-qxjdj\") pod \"d69799cd-9912-4861-b4a3-ea8214fc5530\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data-custom\") pod \"d69799cd-9912-4861-b4a3-ea8214fc5530\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.479841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-etc-machine-id\") pod \"d69799cd-9912-4861-b4a3-ea8214fc5530\" (UID: \"d69799cd-9912-4861-b4a3-ea8214fc5530\") " Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.480024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d69799cd-9912-4861-b4a3-ea8214fc5530" (UID: "d69799cd-9912-4861-b4a3-ea8214fc5530"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.480344 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.480360 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d69799cd-9912-4861-b4a3-ea8214fc5530-var-lib-manila\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.486236 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-ceph" (OuterVolumeSpecName: "ceph") pod "d69799cd-9912-4861-b4a3-ea8214fc5530" (UID: "d69799cd-9912-4861-b4a3-ea8214fc5530"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.487868 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-kube-api-access-qxjdj" (OuterVolumeSpecName: "kube-api-access-qxjdj") pod "d69799cd-9912-4861-b4a3-ea8214fc5530" (UID: "d69799cd-9912-4861-b4a3-ea8214fc5530"). InnerVolumeSpecName "kube-api-access-qxjdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.490243 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-scripts" (OuterVolumeSpecName: "scripts") pod "d69799cd-9912-4861-b4a3-ea8214fc5530" (UID: "d69799cd-9912-4861-b4a3-ea8214fc5530"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.498659 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d69799cd-9912-4861-b4a3-ea8214fc5530" (UID: "d69799cd-9912-4861-b4a3-ea8214fc5530"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.537244 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d69799cd-9912-4861-b4a3-ea8214fc5530" (UID: "d69799cd-9912-4861-b4a3-ea8214fc5530"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.582701 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.582739 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.582752 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.582762 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-ceph\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.582773 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxjdj\" (UniqueName: \"kubernetes.io/projected/d69799cd-9912-4861-b4a3-ea8214fc5530-kube-api-access-qxjdj\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.592208 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data" (OuterVolumeSpecName: "config-data") pod "d69799cd-9912-4861-b4a3-ea8214fc5530" (UID: "d69799cd-9912-4861-b4a3-ea8214fc5530"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.684793 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69799cd-9912-4861-b4a3-ea8214fc5530-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.914398 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.914400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d69799cd-9912-4861-b4a3-ea8214fc5530","Type":"ContainerDied","Data":"fcdf7ff12870c286861d84edc05129e02b8d4b796642917d85c4caf0551ce08d"} Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.916323 4813 scope.go:117] "RemoveContainer" containerID="c0b84fac87801d50b1748d6a1c4ff52a92bbb28f836409161a74538750acc7db" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.983043 4813 scope.go:117] "RemoveContainer" containerID="46cdf49a3c7f05a906560615fedc9c290cb06bc2db556f7c229180164a28fec1" Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.990178 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:44 crc kubenswrapper[4813]: I1202 11:10:44.998644 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.024827 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:45 crc kubenswrapper[4813]: E1202 11:10:45.025191 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerName="probe" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.025206 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerName="probe" Dec 02 11:10:45 crc kubenswrapper[4813]: E1202 11:10:45.025240 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerName="manila-share" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.025245 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerName="manila-share" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.025410 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerName="manila-share" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.025427 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" containerName="probe" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.032263 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.043499 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.070748 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.093726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/99110d27-ba93-4f75-a898-acf87c7b4f14-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.093790 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-scripts\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.093806 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.093830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.093849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99110d27-ba93-4f75-a898-acf87c7b4f14-ceph\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.093873 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-config-data\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.093916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9pp\" (UniqueName: \"kubernetes.io/projected/99110d27-ba93-4f75-a898-acf87c7b4f14-kube-api-access-ml9pp\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.093958 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99110d27-ba93-4f75-a898-acf87c7b4f14-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc 
kubenswrapper[4813]: I1202 11:10:45.205924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9pp\" (UniqueName: \"kubernetes.io/projected/99110d27-ba93-4f75-a898-acf87c7b4f14-kube-api-access-ml9pp\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.210927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99110d27-ba93-4f75-a898-acf87c7b4f14-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.211049 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99110d27-ba93-4f75-a898-acf87c7b4f14-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.211173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/99110d27-ba93-4f75-a898-acf87c7b4f14-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.211235 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-scripts\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.211255 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.211309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.211332 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99110d27-ba93-4f75-a898-acf87c7b4f14-ceph\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.211380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-config-data\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.211395 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/99110d27-ba93-4f75-a898-acf87c7b4f14-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.215321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99110d27-ba93-4f75-a898-acf87c7b4f14-ceph\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.215589 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.217066 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.217379 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-scripts\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.217726 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99110d27-ba93-4f75-a898-acf87c7b4f14-config-data\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.222040 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9pp\" (UniqueName: \"kubernetes.io/projected/99110d27-ba93-4f75-a898-acf87c7b4f14-kube-api-access-ml9pp\") pod \"manila-share-share1-0\" (UID: \"99110d27-ba93-4f75-a898-acf87c7b4f14\") " pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.365871 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Dec 02 11:10:45 crc kubenswrapper[4813]: I1202 11:10:45.953981 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 02 11:10:46 crc kubenswrapper[4813]: I1202 11:10:46.096920 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69799cd-9912-4861-b4a3-ea8214fc5530" path="/var/lib/kubelet/pods/d69799cd-9912-4861-b4a3-ea8214fc5530/volumes" Dec 02 11:10:46 crc kubenswrapper[4813]: I1202 11:10:46.932708 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"99110d27-ba93-4f75-a898-acf87c7b4f14","Type":"ContainerStarted","Data":"b067c88b1892cfbac194756d8cd8613d7f2e2a3c254a6b361954caefa36a08f6"} Dec 02 11:10:46 crc kubenswrapper[4813]: I1202 11:10:46.933294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"99110d27-ba93-4f75-a898-acf87c7b4f14","Type":"ContainerStarted","Data":"159695f1e972d764fbd17117ca522142ee4646e84d0ca9180bfe326196571fdc"} Dec 02 11:10:46 crc kubenswrapper[4813]: I1202 11:10:46.933312 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"99110d27-ba93-4f75-a898-acf87c7b4f14","Type":"ContainerStarted","Data":"972f10ba390ffdcd4413464529225b463496a9bef919c5073e4c1d49186080b8"} Dec 02 11:10:46 crc kubenswrapper[4813]: I1202 11:10:46.962601 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.962582427 podStartE2EDuration="2.962582427s" podCreationTimestamp="2025-12-02 11:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:10:46.955126356 +0000 UTC m=+3771.150300658" watchObservedRunningTime="2025-12-02 11:10:46.962582427 +0000 UTC m=+3771.157756729" Dec 02 11:10:47 crc kubenswrapper[4813]: I1202 11:10:47.431533 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Dec 02 11:10:47 crc kubenswrapper[4813]: I1202 11:10:47.942399 4813 generic.go:334] "Generic (PLEG): container finished" podID="a43cce0b-8a24-4057-afd7-951858e734aa" containerID="31e5c8aad5c6856346bef5f8304b055cc01865f764522fbf506afb8c55b90bda" exitCode=0 Dec 02 11:10:47 crc kubenswrapper[4813]: I1202 11:10:47.942549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a43cce0b-8a24-4057-afd7-951858e734aa","Type":"ContainerDied","Data":"31e5c8aad5c6856346bef5f8304b055cc01865f764522fbf506afb8c55b90bda"} Dec 02 11:10:47 crc kubenswrapper[4813]: I1202 11:10:47.977461 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77cfc9896b-llw2g" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.070914 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.170917 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a43cce0b-8a24-4057-afd7-951858e734aa-etc-machine-id\") pod \"a43cce0b-8a24-4057-afd7-951858e734aa\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.171018 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sr9m\" (UniqueName: \"kubernetes.io/projected/a43cce0b-8a24-4057-afd7-951858e734aa-kube-api-access-9sr9m\") pod \"a43cce0b-8a24-4057-afd7-951858e734aa\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.171111 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a43cce0b-8a24-4057-afd7-951858e734aa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a43cce0b-8a24-4057-afd7-951858e734aa" (UID: "a43cce0b-8a24-4057-afd7-951858e734aa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.171140 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data-custom\") pod \"a43cce0b-8a24-4057-afd7-951858e734aa\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.171290 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-scripts\") pod \"a43cce0b-8a24-4057-afd7-951858e734aa\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.171323 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-combined-ca-bundle\") pod \"a43cce0b-8a24-4057-afd7-951858e734aa\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.171387 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data\") pod \"a43cce0b-8a24-4057-afd7-951858e734aa\" (UID: \"a43cce0b-8a24-4057-afd7-951858e734aa\") " Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.173283 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a43cce0b-8a24-4057-afd7-951858e734aa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.176825 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-scripts" (OuterVolumeSpecName: "scripts") pod "a43cce0b-8a24-4057-afd7-951858e734aa" (UID: "a43cce0b-8a24-4057-afd7-951858e734aa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.176981 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43cce0b-8a24-4057-afd7-951858e734aa-kube-api-access-9sr9m" (OuterVolumeSpecName: "kube-api-access-9sr9m") pod "a43cce0b-8a24-4057-afd7-951858e734aa" (UID: "a43cce0b-8a24-4057-afd7-951858e734aa"). InnerVolumeSpecName "kube-api-access-9sr9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.182572 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a43cce0b-8a24-4057-afd7-951858e734aa" (UID: "a43cce0b-8a24-4057-afd7-951858e734aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.225895 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a43cce0b-8a24-4057-afd7-951858e734aa" (UID: "a43cce0b-8a24-4057-afd7-951858e734aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.276214 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sr9m\" (UniqueName: \"kubernetes.io/projected/a43cce0b-8a24-4057-afd7-951858e734aa-kube-api-access-9sr9m\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.276262 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.276272 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.276280 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.291031 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data" (OuterVolumeSpecName: "config-data") pod "a43cce0b-8a24-4057-afd7-951858e734aa" (UID: "a43cce0b-8a24-4057-afd7-951858e734aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.378337 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43cce0b-8a24-4057-afd7-951858e734aa-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.958599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a43cce0b-8a24-4057-afd7-951858e734aa","Type":"ContainerDied","Data":"6f8146d14471e5d256e6074bfa1540a1f572232b9485e511c83d1142c0c22abe"} Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.958667 4813 scope.go:117] "RemoveContainer" containerID="829be3ea947d63a15a615353879f135a20246d538e211836a9a653b23cd55e9f" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.958865 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.984719 4813 scope.go:117] "RemoveContainer" containerID="31e5c8aad5c6856346bef5f8304b055cc01865f764522fbf506afb8c55b90bda" Dec 02 11:10:48 crc kubenswrapper[4813]: I1202 11:10:48.995054 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.005409 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.027732 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:49 crc kubenswrapper[4813]: E1202 11:10:49.028237 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" containerName="probe" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.028264 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" containerName="probe" Dec 02 11:10:49 crc kubenswrapper[4813]: E1202 11:10:49.028294 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" containerName="manila-scheduler" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.028303 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" containerName="manila-scheduler" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.028515 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" containerName="probe" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.028539 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" containerName="manila-scheduler" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.029579 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.032414 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.041044 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.090232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.090278 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-config-data\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.090501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/113670c8-595f-41d3-8af4-47f7cc0a6833-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.090751 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgfcq\" (UniqueName: \"kubernetes.io/projected/113670c8-595f-41d3-8af4-47f7cc0a6833-kube-api-access-sgfcq\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.090781 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.090862 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-scripts\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.192454 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-scripts\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.193614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.194129 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-config-data\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.194829 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/113670c8-595f-41d3-8af4-47f7cc0a6833-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.194883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/113670c8-595f-41d3-8af4-47f7cc0a6833-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.194971 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgfcq\" (UniqueName: \"kubernetes.io/projected/113670c8-595f-41d3-8af4-47f7cc0a6833-kube-api-access-sgfcq\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.194995 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.198509 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.198771 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.202381 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-config-data\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.204393 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113670c8-595f-41d3-8af4-47f7cc0a6833-scripts\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.224009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgfcq\" (UniqueName: \"kubernetes.io/projected/113670c8-595f-41d3-8af4-47f7cc0a6833-kube-api-access-sgfcq\") pod \"manila-scheduler-0\" (UID: \"113670c8-595f-41d3-8af4-47f7cc0a6833\") " 
pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.345576 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.792386 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 02 11:10:49 crc kubenswrapper[4813]: I1202 11:10:49.976163 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"113670c8-595f-41d3-8af4-47f7cc0a6833","Type":"ContainerStarted","Data":"8172a269fc6d723ad25306014c8954f1d7f922972e36598f316660f79d6b35fe"} Dec 02 11:10:50 crc kubenswrapper[4813]: I1202 11:10:50.068152 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:10:50 crc kubenswrapper[4813]: E1202 11:10:50.068468 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:10:50 crc kubenswrapper[4813]: I1202 11:10:50.077598 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43cce0b-8a24-4057-afd7-951858e734aa" path="/var/lib/kubelet/pods/a43cce0b-8a24-4057-afd7-951858e734aa/volumes" Dec 02 11:10:50 crc kubenswrapper[4813]: I1202 11:10:50.986231 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"113670c8-595f-41d3-8af4-47f7cc0a6833","Type":"ContainerStarted","Data":"66891c9c4b2b56f2d2e35497181688e6570e88f0bbbb883c3ae2597d9154ba5a"} Dec 02 11:10:51 crc kubenswrapper[4813]: I1202 11:10:51.995959 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"113670c8-595f-41d3-8af4-47f7cc0a6833","Type":"ContainerStarted","Data":"deb30d6b9f5a9d0e3d54ef40e8a9125f414007f9ac9e4872d941039a178fd5ec"} Dec 02 11:10:52 crc kubenswrapper[4813]: I1202 11:10:52.017745 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.017729073 podStartE2EDuration="4.017729073s" podCreationTimestamp="2025-12-02 11:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:10:52.015421987 +0000 UTC m=+3776.210596289" watchObservedRunningTime="2025-12-02 11:10:52.017729073 +0000 UTC m=+3776.212903375" Dec 02 11:10:55 crc kubenswrapper[4813]: I1202 11:10:55.366382 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 02 11:10:57 crc kubenswrapper[4813]: I1202 11:10:57.977890 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77cfc9896b-llw2g" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.246:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.246:8443: connect: connection refused" Dec 02 11:10:59 crc kubenswrapper[4813]: I1202 11:10:59.345692 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 
11:11:03.719938 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.819639 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-secret-key\") pod \"783caf3f-632f-4ee5-9ace-b9337879d5c0\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.819756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-scripts\") pod \"783caf3f-632f-4ee5-9ace-b9337879d5c0\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.819848 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77l7g\" (UniqueName: \"kubernetes.io/projected/783caf3f-632f-4ee5-9ace-b9337879d5c0-kube-api-access-77l7g\") pod \"783caf3f-632f-4ee5-9ace-b9337879d5c0\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.819887 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-combined-ca-bundle\") pod \"783caf3f-632f-4ee5-9ace-b9337879d5c0\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.819932 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-tls-certs\") pod \"783caf3f-632f-4ee5-9ace-b9337879d5c0\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.820132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-config-data\") pod \"783caf3f-632f-4ee5-9ace-b9337879d5c0\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.820212 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783caf3f-632f-4ee5-9ace-b9337879d5c0-logs\") pod \"783caf3f-632f-4ee5-9ace-b9337879d5c0\" (UID: \"783caf3f-632f-4ee5-9ace-b9337879d5c0\") " Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.820672 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783caf3f-632f-4ee5-9ace-b9337879d5c0-logs" (OuterVolumeSpecName: "logs") pod "783caf3f-632f-4ee5-9ace-b9337879d5c0" (UID: "783caf3f-632f-4ee5-9ace-b9337879d5c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.821952 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783caf3f-632f-4ee5-9ace-b9337879d5c0-logs\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.825753 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783caf3f-632f-4ee5-9ace-b9337879d5c0-kube-api-access-77l7g" (OuterVolumeSpecName: "kube-api-access-77l7g") pod "783caf3f-632f-4ee5-9ace-b9337879d5c0" (UID: "783caf3f-632f-4ee5-9ace-b9337879d5c0"). InnerVolumeSpecName "kube-api-access-77l7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.826158 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "783caf3f-632f-4ee5-9ace-b9337879d5c0" (UID: "783caf3f-632f-4ee5-9ace-b9337879d5c0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.843943 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-scripts" (OuterVolumeSpecName: "scripts") pod "783caf3f-632f-4ee5-9ace-b9337879d5c0" (UID: "783caf3f-632f-4ee5-9ace-b9337879d5c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.855698 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-config-data" (OuterVolumeSpecName: "config-data") pod "783caf3f-632f-4ee5-9ace-b9337879d5c0" (UID: "783caf3f-632f-4ee5-9ace-b9337879d5c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.873242 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "783caf3f-632f-4ee5-9ace-b9337879d5c0" (UID: "783caf3f-632f-4ee5-9ace-b9337879d5c0"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.873409 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "783caf3f-632f-4ee5-9ace-b9337879d5c0" (UID: "783caf3f-632f-4ee5-9ace-b9337879d5c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.924303 4813 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.924347 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.924361 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77l7g\" (UniqueName: \"kubernetes.io/projected/783caf3f-632f-4ee5-9ace-b9337879d5c0-kube-api-access-77l7g\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.924375 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.924386 4813 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/783caf3f-632f-4ee5-9ace-b9337879d5c0-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:03 crc kubenswrapper[4813]: I1202 11:11:03.924396 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/783caf3f-632f-4ee5-9ace-b9337879d5c0-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.069220 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:11:04 crc kubenswrapper[4813]: E1202 11:11:04.069716 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.119410 4813 generic.go:334] "Generic (PLEG): container finished" podID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerID="9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3" exitCode=137 Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.119461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77cfc9896b-llw2g" event={"ID":"783caf3f-632f-4ee5-9ace-b9337879d5c0","Type":"ContainerDied","Data":"9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3"} Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.119496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77cfc9896b-llw2g" event={"ID":"783caf3f-632f-4ee5-9ace-b9337879d5c0","Type":"ContainerDied","Data":"11acdd7e9bff8a992e5c598f21c773c0c8aaf8d8132a113ad58e533a98217120"} Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.119524 4813 scope.go:117] "RemoveContainer" containerID="ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73" Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.119549 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77cfc9896b-llw2g" Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.147238 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77cfc9896b-llw2g"] Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.155732 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77cfc9896b-llw2g"] Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.305939 4813 scope.go:117] "RemoveContainer" containerID="9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3" Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.327938 4813 scope.go:117] "RemoveContainer" containerID="ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73" Dec 02 11:11:04 crc kubenswrapper[4813]: E1202 11:11:04.328380 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73\": container with ID starting with ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73 not found: ID does not exist" containerID="ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73" Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.328424 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73"} err="failed to get container status \"ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73\": rpc error: code = NotFound desc = could not find container \"ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73\": container with ID starting with ecb8bbc54a96f1a5ab9a8b11aa4db68e05291745b985165e8399d31f3c074b73 not found: ID does not exist" Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.328454 4813 scope.go:117] "RemoveContainer" containerID="9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3" Dec 02 11:11:04 crc kubenswrapper[4813]: E1202 11:11:04.329183 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3\": container with ID starting with 9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3 not found: ID does not exist" containerID="9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3" Dec 02 11:11:04 crc kubenswrapper[4813]: I1202 11:11:04.329238 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3"} err="failed to get container status \"9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3\": rpc error: code = NotFound desc = could not find container \"9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3\": container with ID starting with 9bdef1c03dfef3fc6bb376def0b3a3b1eff04ffbe0cca5867032d6124e1d53f3 not found: ID does not exist" Dec 02 11:11:06 crc kubenswrapper[4813]: I1202 11:11:06.085317 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" path="/var/lib/kubelet/pods/783caf3f-632f-4ee5-9ace-b9337879d5c0/volumes" Dec 02 11:11:06 crc kubenswrapper[4813]: I1202 11:11:06.882736 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 02 11:11:08 crc kubenswrapper[4813]: I1202 11:11:08.214209 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 11:11:10 crc kubenswrapper[4813]: I1202 11:11:10.950394 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 02 11:11:19 crc kubenswrapper[4813]: I1202 11:11:19.067815 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:11:19 crc kubenswrapper[4813]: E1202 11:11:19.068671 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:11:34 crc kubenswrapper[4813]: I1202 11:11:34.068356 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:11:34 crc kubenswrapper[4813]: E1202 11:11:34.069400 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.337682 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-99pgl"] Dec 02 11:11:38 crc kubenswrapper[4813]: E1202 11:11:38.339421 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.339454 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" Dec 02 11:11:38 crc kubenswrapper[4813]: E1202 11:11:38.339488 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon-log" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.339545 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon-log" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.339964 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon-log" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.340039 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="783caf3f-632f-4ee5-9ace-b9337879d5c0" containerName="horizon" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.343381 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.357468 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99pgl"] Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.507676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-catalog-content\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.508001 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-utilities\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.508173 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdcrg\" (UniqueName: \"kubernetes.io/projected/16d05941-38c3-4d6a-862e-ced71e381fe9-kube-api-access-mdcrg\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.609886 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-catalog-content\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.609947 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-utilities\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.609998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdcrg\" (UniqueName: \"kubernetes.io/projected/16d05941-38c3-4d6a-862e-ced71e381fe9-kube-api-access-mdcrg\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.610688 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-catalog-content\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.610741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-utilities\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.631332 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mdcrg\" (UniqueName: \"kubernetes.io/projected/16d05941-38c3-4d6a-862e-ced71e381fe9-kube-api-access-mdcrg\") pod \"community-operators-99pgl\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:38 crc kubenswrapper[4813]: I1202 11:11:38.672537 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:39 crc kubenswrapper[4813]: I1202 11:11:39.248764 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99pgl"] Dec 02 11:11:39 crc kubenswrapper[4813]: W1202 11:11:39.261102 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d05941_38c3_4d6a_862e_ced71e381fe9.slice/crio-4237d869bf69eaf236473b35903d047a89094ed03f7a6b737eb1c0c02d1a337b WatchSource:0}: Error finding container 4237d869bf69eaf236473b35903d047a89094ed03f7a6b737eb1c0c02d1a337b: Status 404 returned error can't find the container with id 4237d869bf69eaf236473b35903d047a89094ed03f7a6b737eb1c0c02d1a337b Dec 02 11:11:39 crc kubenswrapper[4813]: I1202 11:11:39.864989 4813 generic.go:334] "Generic (PLEG): container finished" podID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerID="1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1" exitCode=0 Dec 02 11:11:39 crc kubenswrapper[4813]: I1202 11:11:39.865121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99pgl" event={"ID":"16d05941-38c3-4d6a-862e-ced71e381fe9","Type":"ContainerDied","Data":"1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1"} Dec 02 11:11:39 crc kubenswrapper[4813]: I1202 11:11:39.865175 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99pgl" event={"ID":"16d05941-38c3-4d6a-862e-ced71e381fe9","Type":"ContainerStarted","Data":"4237d869bf69eaf236473b35903d047a89094ed03f7a6b737eb1c0c02d1a337b"} Dec 02 11:11:40 crc kubenswrapper[4813]: I1202 11:11:40.879114 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99pgl" event={"ID":"16d05941-38c3-4d6a-862e-ced71e381fe9","Type":"ContainerStarted","Data":"b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22"} Dec 02 11:11:41 crc kubenswrapper[4813]: I1202 11:11:41.895760 4813 generic.go:334] "Generic (PLEG): container finished" podID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerID="b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22" exitCode=0 Dec 02 11:11:41 crc kubenswrapper[4813]: I1202 11:11:41.895857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99pgl" event={"ID":"16d05941-38c3-4d6a-862e-ced71e381fe9","Type":"ContainerDied","Data":"b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22"} Dec 02 11:11:43 crc kubenswrapper[4813]: I1202 11:11:43.915797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99pgl" event={"ID":"16d05941-38c3-4d6a-862e-ced71e381fe9","Type":"ContainerStarted","Data":"2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea"} Dec 02 11:11:43 crc kubenswrapper[4813]: I1202 11:11:43.935872 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-99pgl" 
podStartSLOduration=3.110907062 podStartE2EDuration="5.93584087s" podCreationTimestamp="2025-12-02 11:11:38 +0000 UTC" firstStartedPulling="2025-12-02 11:11:39.869925165 +0000 UTC m=+3824.065099507" lastFinishedPulling="2025-12-02 11:11:42.694859013 +0000 UTC m=+3826.890033315" observedRunningTime="2025-12-02 11:11:43.935239133 +0000 UTC m=+3828.130413465" watchObservedRunningTime="2025-12-02 11:11:43.93584087 +0000 UTC m=+3828.131015172" Dec 02 11:11:46 crc kubenswrapper[4813]: I1202 11:11:46.079833 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:11:46 crc kubenswrapper[4813]: E1202 11:11:46.083248 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:11:48 crc kubenswrapper[4813]: I1202 11:11:48.673681 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:48 crc kubenswrapper[4813]: I1202 11:11:48.674689 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:48 crc kubenswrapper[4813]: I1202 11:11:48.729907 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:49 crc kubenswrapper[4813]: I1202 11:11:49.024421 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:49 crc kubenswrapper[4813]: I1202 11:11:49.076550 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99pgl"] Dec 02 11:11:50 crc kubenswrapper[4813]: I1202 11:11:50.982476 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-99pgl" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerName="registry-server" containerID="cri-o://2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea" gracePeriod=2 Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.468923 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.490661 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdcrg\" (UniqueName: \"kubernetes.io/projected/16d05941-38c3-4d6a-862e-ced71e381fe9-kube-api-access-mdcrg\") pod \"16d05941-38c3-4d6a-862e-ced71e381fe9\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.490783 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-utilities\") pod \"16d05941-38c3-4d6a-862e-ced71e381fe9\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.490840 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-catalog-content\") pod \"16d05941-38c3-4d6a-862e-ced71e381fe9\" (UID: \"16d05941-38c3-4d6a-862e-ced71e381fe9\") " Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.497824 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-utilities" (OuterVolumeSpecName: "utilities") pod "16d05941-38c3-4d6a-862e-ced71e381fe9" (UID: "16d05941-38c3-4d6a-862e-ced71e381fe9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.501887 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d05941-38c3-4d6a-862e-ced71e381fe9-kube-api-access-mdcrg" (OuterVolumeSpecName: "kube-api-access-mdcrg") pod "16d05941-38c3-4d6a-862e-ced71e381fe9" (UID: "16d05941-38c3-4d6a-862e-ced71e381fe9"). InnerVolumeSpecName "kube-api-access-mdcrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.540569 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16d05941-38c3-4d6a-862e-ced71e381fe9" (UID: "16d05941-38c3-4d6a-862e-ced71e381fe9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.594026 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.594090 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdcrg\" (UniqueName: \"kubernetes.io/projected/16d05941-38c3-4d6a-862e-ced71e381fe9-kube-api-access-mdcrg\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.594109 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d05941-38c3-4d6a-862e-ced71e381fe9-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.997491 4813 generic.go:334] "Generic (PLEG): container finished" podID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerID="2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea" exitCode=0 Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.997860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99pgl" event={"ID":"16d05941-38c3-4d6a-862e-ced71e381fe9","Type":"ContainerDied","Data":"2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea"} Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.997895 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99pgl" event={"ID":"16d05941-38c3-4d6a-862e-ced71e381fe9","Type":"ContainerDied","Data":"4237d869bf69eaf236473b35903d047a89094ed03f7a6b737eb1c0c02d1a337b"} Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.997917 4813 scope.go:117] "RemoveContainer" containerID="2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea" Dec 02 11:11:51 crc kubenswrapper[4813]: I1202 11:11:51.998104 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99pgl" Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.034094 4813 scope.go:117] "RemoveContainer" containerID="b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22" Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.055316 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99pgl"] Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.077208 4813 scope.go:117] "RemoveContainer" containerID="1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1" Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.093202 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-99pgl"] Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.135906 4813 scope.go:117] "RemoveContainer" containerID="2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea" Dec 02 11:11:52 crc kubenswrapper[4813]: E1202 11:11:52.137208 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea\": container with ID starting with 2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea not found: ID does not exist" containerID="2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea" Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.137253 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea"} err="failed to get container status \"2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea\": rpc error: code = NotFound desc = could not find container \"2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea\": container with ID starting with 2e538ec8c4027b7a51930175858c23e1c10fc00a02e9e4edf42508eda9dd3cea not found: ID does not exist" Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.137302 4813 scope.go:117] "RemoveContainer" containerID="b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22" Dec 02 11:11:52 crc kubenswrapper[4813]: E1202 11:11:52.137785 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22\": container with ID starting with b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22 not found: ID does not exist" containerID="b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22" Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.137820 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22"} err="failed to get container status \"b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22\": rpc error: code = NotFound desc = could not find container \"b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22\": container with ID starting with b037f69986308006bb231d4fa7085793b531facfde551ec063ccdc5a5dfd3d22 not found: ID does not exist" Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.137841 4813 scope.go:117] "RemoveContainer" containerID="1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1" Dec 02 11:11:52 crc kubenswrapper[4813]: E1202 11:11:52.138536 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1\": container with ID starting with 1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1 not found: ID does not exist" containerID="1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1" Dec 02 11:11:52 crc kubenswrapper[4813]: I1202 11:11:52.138557 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1"} err="failed to get container status \"1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1\": rpc error: code = NotFound desc = could not find container \"1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1\": container with ID starting with 1ae2c454cf4639814e1c1cb25520db86a5958782e237cad5d8986de5ba6138d1 not found: ID does not exist" Dec 02 11:11:54 crc kubenswrapper[4813]: I1202 11:11:54.079582 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" path="/var/lib/kubelet/pods/16d05941-38c3-4d6a-862e-ced71e381fe9/volumes" Dec 02 11:12:01 crc kubenswrapper[4813]: I1202 11:12:01.068055 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:12:01 crc kubenswrapper[4813]: E1202 11:12:01.069060 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.925963 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 11:12:11 crc kubenswrapper[4813]: E1202 11:12:11.927231 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerName="extract-utilities" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.927258 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerName="extract-utilities" Dec 02 11:12:11 crc kubenswrapper[4813]: E1202 11:12:11.927320 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerName="extract-content" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.927333 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerName="extract-content" Dec 02 11:12:11 crc kubenswrapper[4813]: E1202 11:12:11.927358 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerName="registry-server" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.927372 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerName="registry-server" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.927670 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d05941-38c3-4d6a-862e-ced71e381fe9" containerName="registry-server" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.928676 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.931844 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.931844 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.932437 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.932821 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rrxgk" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.940533 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.994632 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-config-data\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.994676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:11 crc kubenswrapper[4813]: I1202 11:12:11.994814 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.068309 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:12:12 crc kubenswrapper[4813]: E1202 11:12:12.068745 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.096845 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.096926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.096999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-config-data\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.097066 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.097143 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.097200 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jlr\" (UniqueName: \"kubernetes.io/projected/456994e2-7687-4a9b-be60-d172f26b11e4-kube-api-access-t5jlr\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.097252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.097296 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.097325 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.098690 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.098840 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-config-data\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc 
kubenswrapper[4813]: I1202 11:12:12.106698 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.199410 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5jlr\" (UniqueName: \"kubernetes.io/projected/456994e2-7687-4a9b-be60-d172f26b11e4-kube-api-access-t5jlr\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.199520 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.199581 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.199718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.199755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.199980 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.201017 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.201227 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.201545 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.203777 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.205117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.216922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5jlr\" (UniqueName: \"kubernetes.io/projected/456994e2-7687-4a9b-be60-d172f26b11e4-kube-api-access-t5jlr\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.225791 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") " pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.253468 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 11:12:12 crc kubenswrapper[4813]: I1202 11:12:12.750972 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 11:12:13 crc kubenswrapper[4813]: I1202 11:12:13.208133 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"456994e2-7687-4a9b-be60-d172f26b11e4","Type":"ContainerStarted","Data":"4b073681918dc493161c9ff759812e427ec4723a9c046a374fd74771dcfb515e"} Dec 02 11:12:25 crc kubenswrapper[4813]: I1202 11:12:25.068396 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:12:25 crc kubenswrapper[4813]: E1202 11:12:25.069306 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:12:40 crc kubenswrapper[4813]: I1202 11:12:40.067595 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:12:49 crc kubenswrapper[4813]: E1202 11:12:49.939665 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 02 11:12:49 crc kubenswrapper[4813]: E1202 11:12:49.940419 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5jlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(456994e2-7687-4a9b-be60-d172f26b11e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 11:12:49 crc kubenswrapper[4813]: E1202 11:12:49.941935 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="456994e2-7687-4a9b-be60-d172f26b11e4" Dec 02 11:12:50 crc kubenswrapper[4813]: I1202 11:12:50.579403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"31d0230e7c47a3ecf452dd08a1313f6b13c04ad9ab13733661560f4cc875a1fa"} Dec 02 11:12:50 crc kubenswrapper[4813]: E1202 11:12:50.580757 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="456994e2-7687-4a9b-be60-d172f26b11e4" Dec 02 11:13:05 crc kubenswrapper[4813]: I1202 11:13:05.070423 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:13:05 crc kubenswrapper[4813]: I1202 11:13:05.846479 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 02 11:13:07 crc kubenswrapper[4813]: I1202 11:13:07.742131 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"456994e2-7687-4a9b-be60-d172f26b11e4","Type":"ContainerStarted","Data":"8ef471e31c2042c1c724e49a6f776c768785c44154510e0d18148889195665b5"} Dec 02 11:13:56 crc kubenswrapper[4813]: I1202 11:13:56.759840 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=53.658782486 podStartE2EDuration="1m46.759814337s" podCreationTimestamp="2025-12-02 11:12:10 +0000 UTC" firstStartedPulling="2025-12-02 11:12:12.743367798 +0000 UTC m=+3856.938542100" lastFinishedPulling="2025-12-02 11:13:05.844399649 +0000 UTC m=+3910.039573951" observedRunningTime="2025-12-02 11:13:07.760606117 +0000 UTC m=+3911.955780439" watchObservedRunningTime="2025-12-02 11:13:56.759814337 +0000 UTC m=+3960.954988649" Dec 02 11:13:56 crc kubenswrapper[4813]: I1202 11:13:56.764501 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h2sw9"] Dec 02 11:13:56 crc kubenswrapper[4813]: I1202 11:13:56.767243 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:56 crc kubenswrapper[4813]: I1202 11:13:56.778897 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2sw9"] Dec 02 11:13:56 crc kubenswrapper[4813]: I1202 11:13:56.914668 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-utilities\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:56 crc kubenswrapper[4813]: I1202 11:13:56.914740 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-catalog-content\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:56 crc kubenswrapper[4813]: I1202 11:13:56.914768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8t8\" (UniqueName: \"kubernetes.io/projected/78aa7a2c-012b-4e15-84a6-6213e00759a4-kube-api-access-zd8t8\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:57 crc kubenswrapper[4813]: I1202 11:13:57.017039 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-catalog-content\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:57 crc kubenswrapper[4813]: I1202 11:13:57.017095 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd8t8\" (UniqueName: \"kubernetes.io/projected/78aa7a2c-012b-4e15-84a6-6213e00759a4-kube-api-access-zd8t8\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:57 crc kubenswrapper[4813]: I1202 11:13:57.017248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-utilities\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:57 crc kubenswrapper[4813]: I1202 11:13:57.017547 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-catalog-content\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:57 crc kubenswrapper[4813]: I1202 11:13:57.017628 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-utilities\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:57 crc kubenswrapper[4813]: I1202 11:13:57.037383 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zd8t8\" (UniqueName: \"kubernetes.io/projected/78aa7a2c-012b-4e15-84a6-6213e00759a4-kube-api-access-zd8t8\") pod \"redhat-marketplace-h2sw9\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:57 crc kubenswrapper[4813]: I1202 11:13:57.089754 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:13:57 crc kubenswrapper[4813]: I1202 11:13:57.570190 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2sw9"] Dec 02 11:13:58 crc kubenswrapper[4813]: I1202 11:13:58.258693 4813 generic.go:334] "Generic (PLEG): container finished" podID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerID="09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1" exitCode=0 Dec 02 11:13:58 crc kubenswrapper[4813]: I1202 11:13:58.258809 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2sw9" event={"ID":"78aa7a2c-012b-4e15-84a6-6213e00759a4","Type":"ContainerDied","Data":"09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1"} Dec 02 11:13:58 crc kubenswrapper[4813]: I1202 11:13:58.259046 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2sw9" event={"ID":"78aa7a2c-012b-4e15-84a6-6213e00759a4","Type":"ContainerStarted","Data":"60d5301bc9d45be2f6ee316dfcbebf030f0781c42bde14ac8fcd4ee35fecd2a3"} Dec 02 11:14:00 crc kubenswrapper[4813]: I1202 11:14:00.282403 4813 generic.go:334] "Generic (PLEG): container finished" podID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerID="b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105" exitCode=0 Dec 02 11:14:00 crc kubenswrapper[4813]: I1202 11:14:00.282985 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2sw9" event={"ID":"78aa7a2c-012b-4e15-84a6-6213e00759a4","Type":"ContainerDied","Data":"b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105"} Dec 02 11:14:01 crc kubenswrapper[4813]: I1202 11:14:01.296260 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2sw9" event={"ID":"78aa7a2c-012b-4e15-84a6-6213e00759a4","Type":"ContainerStarted","Data":"a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f"} Dec 02 11:14:01 crc kubenswrapper[4813]: I1202 11:14:01.334607 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h2sw9" podStartSLOduration=2.695662952 podStartE2EDuration="5.334589115s" podCreationTimestamp="2025-12-02 11:13:56 +0000 UTC" firstStartedPulling="2025-12-02 11:13:58.260132376 +0000 UTC m=+3962.455306678" lastFinishedPulling="2025-12-02 11:14:00.899058519 +0000 UTC m=+3965.094232841" observedRunningTime="2025-12-02 11:14:01.320535516 +0000 UTC m=+3965.515709808" watchObservedRunningTime="2025-12-02 11:14:01.334589115 +0000 UTC m=+3965.529763417" Dec 02 11:14:07 crc kubenswrapper[4813]: I1202 11:14:07.092763 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:14:07 crc kubenswrapper[4813]: I1202 11:14:07.093454 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:14:07 crc kubenswrapper[4813]: I1202 11:14:07.146733 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:14:07 crc kubenswrapper[4813]: I1202 11:14:07.446108 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:14:07 crc kubenswrapper[4813]: I1202 11:14:07.502816 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2sw9"] Dec 02 11:14:09 crc kubenswrapper[4813]: I1202 11:14:09.394312 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h2sw9" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerName="registry-server" containerID="cri-o://a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f" gracePeriod=2 Dec 02 11:14:09 crc kubenswrapper[4813]: I1202 11:14:09.939804 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.082189 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-utilities\") pod \"78aa7a2c-012b-4e15-84a6-6213e00759a4\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.082369 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd8t8\" (UniqueName: \"kubernetes.io/projected/78aa7a2c-012b-4e15-84a6-6213e00759a4-kube-api-access-zd8t8\") pod \"78aa7a2c-012b-4e15-84a6-6213e00759a4\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.082831 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-catalog-content\") pod \"78aa7a2c-012b-4e15-84a6-6213e00759a4\" (UID: \"78aa7a2c-012b-4e15-84a6-6213e00759a4\") " Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.083296 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-utilities" (OuterVolumeSpecName: "utilities") pod "78aa7a2c-012b-4e15-84a6-6213e00759a4" (UID: "78aa7a2c-012b-4e15-84a6-6213e00759a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.083704 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.088311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78aa7a2c-012b-4e15-84a6-6213e00759a4-kube-api-access-zd8t8" (OuterVolumeSpecName: "kube-api-access-zd8t8") pod "78aa7a2c-012b-4e15-84a6-6213e00759a4" (UID: "78aa7a2c-012b-4e15-84a6-6213e00759a4"). InnerVolumeSpecName "kube-api-access-zd8t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.111450 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78aa7a2c-012b-4e15-84a6-6213e00759a4" (UID: "78aa7a2c-012b-4e15-84a6-6213e00759a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.185221 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78aa7a2c-012b-4e15-84a6-6213e00759a4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.185262 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd8t8\" (UniqueName: \"kubernetes.io/projected/78aa7a2c-012b-4e15-84a6-6213e00759a4-kube-api-access-zd8t8\") on node \"crc\" DevicePath \"\"" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.403660 4813 generic.go:334] "Generic (PLEG): container finished" podID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerID="a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f" exitCode=0 Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.403701 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2sw9" event={"ID":"78aa7a2c-012b-4e15-84a6-6213e00759a4","Type":"ContainerDied","Data":"a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f"} Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.403727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h2sw9" event={"ID":"78aa7a2c-012b-4e15-84a6-6213e00759a4","Type":"ContainerDied","Data":"60d5301bc9d45be2f6ee316dfcbebf030f0781c42bde14ac8fcd4ee35fecd2a3"} Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.403743 4813 scope.go:117] "RemoveContainer" containerID="a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.404923 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h2sw9" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.445941 4813 scope.go:117] "RemoveContainer" containerID="b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.448473 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2sw9"] Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.459428 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h2sw9"] Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.472257 4813 scope.go:117] "RemoveContainer" containerID="09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.536783 4813 scope.go:117] "RemoveContainer" containerID="a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f" Dec 02 11:14:10 crc kubenswrapper[4813]: E1202 11:14:10.537721 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f\": container with ID starting with a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f not found: ID does not exist" containerID="a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.537779 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f"} err="failed to get container status \"a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f\": rpc error: code = NotFound desc = could not find container \"a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f\": container with ID starting with a3acf2e77df4dac6a9ab5990c529f97d5a082b417e6a490f32308d7c358c0e9f not found: ID does not exist" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.537812 4813 scope.go:117] "RemoveContainer" containerID="b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105" Dec 02 11:14:10 crc kubenswrapper[4813]: E1202 11:14:10.538434 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105\": container with ID starting with b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105 not found: ID does not exist" containerID="b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.538502 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105"} err="failed to get container status \"b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105\": rpc error: code = NotFound desc = could not find container \"b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105\": container with ID starting with b57dee6ebc7efecc67b0cf5b0cf9b0c9e80dc7d6a79c32cac04846a24071a105 not found: ID does not exist" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.538522 4813 scope.go:117] "RemoveContainer" containerID="09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1" Dec 02 11:14:10 crc kubenswrapper[4813]: E1202 11:14:10.538828 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1\": container with ID starting with 09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1 not found: ID does not exist" containerID="09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1" Dec 02 11:14:10 crc kubenswrapper[4813]: I1202 11:14:10.538859 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1"} err="failed to get container status \"09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1\": rpc error: code = NotFound desc = could not find container \"09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1\": container with ID starting with 09ed72677505eb816be52800d431ffa78976c7f593023385ac283ed2a46d07f1 not found: ID does not exist" Dec 02 11:14:12 crc kubenswrapper[4813]: I1202 11:14:12.079846 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" path="/var/lib/kubelet/pods/78aa7a2c-012b-4e15-84a6-6213e00759a4/volumes" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.183491 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5"] Dec 02 11:15:00 crc kubenswrapper[4813]: E1202 11:15:00.184623 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerName="registry-server" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.184647 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerName="registry-server" Dec 02 11:15:00 crc kubenswrapper[4813]: E1202 11:15:00.184675 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerName="extract-utilities" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.184721 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerName="extract-utilities" Dec 02 11:15:00 crc kubenswrapper[4813]: E1202 11:15:00.184777 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerName="extract-content" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.184790 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerName="extract-content" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.185158 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="78aa7a2c-012b-4e15-84a6-6213e00759a4" containerName="registry-server" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.186241 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.189006 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.189113 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.195643 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5"] Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.347165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5gw\" (UniqueName: \"kubernetes.io/projected/90315498-62a4-4dd5-a96c-159135299510-kube-api-access-gx5gw\") pod \"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.347475 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90315498-62a4-4dd5-a96c-159135299510-config-volume\") pod \"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.347535 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90315498-62a4-4dd5-a96c-159135299510-secret-volume\") pod \"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.449689 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5gw\" (UniqueName: \"kubernetes.io/projected/90315498-62a4-4dd5-a96c-159135299510-kube-api-access-gx5gw\") pod \"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.449752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90315498-62a4-4dd5-a96c-159135299510-config-volume\") pod \"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.450861 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90315498-62a4-4dd5-a96c-159135299510-secret-volume\") pod \"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.451765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90315498-62a4-4dd5-a96c-159135299510-config-volume\") pod 
\"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.456531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90315498-62a4-4dd5-a96c-159135299510-secret-volume\") pod \"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.466498 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5gw\" (UniqueName: \"kubernetes.io/projected/90315498-62a4-4dd5-a96c-159135299510-kube-api-access-gx5gw\") pod \"collect-profiles-29411235-t7kx5\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.512641 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:00 crc kubenswrapper[4813]: I1202 11:15:00.981188 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5"] Dec 02 11:15:01 crc kubenswrapper[4813]: I1202 11:15:01.129505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" event={"ID":"90315498-62a4-4dd5-a96c-159135299510","Type":"ContainerStarted","Data":"24a08f6f90f413dbcc3aa23ab651686fb9fe7e4dd35965594f30207455322b84"} Dec 02 11:15:01 crc kubenswrapper[4813]: I1202 11:15:01.129764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" event={"ID":"90315498-62a4-4dd5-a96c-159135299510","Type":"ContainerStarted","Data":"6543d3d2f6de5ed649b7828bb0c596f6130d64da39380e63f233967e3bed2392"} Dec 02 11:15:01 crc kubenswrapper[4813]: I1202 11:15:01.147366 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" podStartSLOduration=1.147350927 podStartE2EDuration="1.147350927s" podCreationTimestamp="2025-12-02 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:15:01.142035736 +0000 UTC m=+4025.337210038" watchObservedRunningTime="2025-12-02 11:15:01.147350927 +0000 UTC m=+4025.342525229" Dec 02 11:15:02 crc kubenswrapper[4813]: I1202 11:15:02.147278 4813 generic.go:334] "Generic (PLEG): container finished" podID="90315498-62a4-4dd5-a96c-159135299510" containerID="24a08f6f90f413dbcc3aa23ab651686fb9fe7e4dd35965594f30207455322b84" exitCode=0 Dec 02 11:15:02 crc kubenswrapper[4813]: I1202 11:15:02.147386 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" event={"ID":"90315498-62a4-4dd5-a96c-159135299510","Type":"ContainerDied","Data":"24a08f6f90f413dbcc3aa23ab651686fb9fe7e4dd35965594f30207455322b84"} Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.539373 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.721455 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx5gw\" (UniqueName: \"kubernetes.io/projected/90315498-62a4-4dd5-a96c-159135299510-kube-api-access-gx5gw\") pod \"90315498-62a4-4dd5-a96c-159135299510\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.721692 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90315498-62a4-4dd5-a96c-159135299510-secret-volume\") pod \"90315498-62a4-4dd5-a96c-159135299510\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.721751 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90315498-62a4-4dd5-a96c-159135299510-config-volume\") pod \"90315498-62a4-4dd5-a96c-159135299510\" (UID: \"90315498-62a4-4dd5-a96c-159135299510\") " Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.722700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90315498-62a4-4dd5-a96c-159135299510-config-volume" (OuterVolumeSpecName: "config-volume") pod "90315498-62a4-4dd5-a96c-159135299510" (UID: "90315498-62a4-4dd5-a96c-159135299510"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.726927 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90315498-62a4-4dd5-a96c-159135299510-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90315498-62a4-4dd5-a96c-159135299510" (UID: "90315498-62a4-4dd5-a96c-159135299510"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.727089 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90315498-62a4-4dd5-a96c-159135299510-kube-api-access-gx5gw" (OuterVolumeSpecName: "kube-api-access-gx5gw") pod "90315498-62a4-4dd5-a96c-159135299510" (UID: "90315498-62a4-4dd5-a96c-159135299510"). InnerVolumeSpecName "kube-api-access-gx5gw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.824256 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx5gw\" (UniqueName: \"kubernetes.io/projected/90315498-62a4-4dd5-a96c-159135299510-kube-api-access-gx5gw\") on node \"crc\" DevicePath \"\"" Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.824283 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90315498-62a4-4dd5-a96c-159135299510-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:15:03 crc kubenswrapper[4813]: I1202 11:15:03.824293 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90315498-62a4-4dd5-a96c-159135299510-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:15:04 crc kubenswrapper[4813]: I1202 11:15:04.165729 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" event={"ID":"90315498-62a4-4dd5-a96c-159135299510","Type":"ContainerDied","Data":"6543d3d2f6de5ed649b7828bb0c596f6130d64da39380e63f233967e3bed2392"} Dec 02 11:15:04 crc kubenswrapper[4813]: I1202 11:15:04.165772 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411235-t7kx5" Dec 02 11:15:04 crc kubenswrapper[4813]: I1202 11:15:04.165773 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6543d3d2f6de5ed649b7828bb0c596f6130d64da39380e63f233967e3bed2392" Dec 02 11:15:04 crc kubenswrapper[4813]: I1202 11:15:04.273899 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:15:04 crc kubenswrapper[4813]: I1202 11:15:04.274256 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:15:04 crc kubenswrapper[4813]: I1202 11:15:04.614896 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr"] Dec 02 11:15:04 crc kubenswrapper[4813]: I1202 11:15:04.626959 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411190-nz6rr"] Dec 02 11:15:06 crc kubenswrapper[4813]: I1202 11:15:06.082138 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa4c2118-f078-4715-9077-171056d01b9b" path="/var/lib/kubelet/pods/aa4c2118-f078-4715-9077-171056d01b9b/volumes" Dec 02 11:15:06 crc kubenswrapper[4813]: I1202 11:15:06.569043 4813 scope.go:117] "RemoveContainer" containerID="e804c712acd104654d7ade4ba9b7883f653f7423ae5a524fec41dc586d571286" Dec 02 11:15:34 crc kubenswrapper[4813]: I1202 11:15:34.273547 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 02 11:15:34 crc kubenswrapper[4813]: I1202 11:15:34.274129 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.273292 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.274980 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.275143 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.276160 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31d0230e7c47a3ecf452dd08a1313f6b13c04ad9ab13733661560f4cc875a1fa"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.276376 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://31d0230e7c47a3ecf452dd08a1313f6b13c04ad9ab13733661560f4cc875a1fa" gracePeriod=600 Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.814430 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="31d0230e7c47a3ecf452dd08a1313f6b13c04ad9ab13733661560f4cc875a1fa" exitCode=0 Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.814517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"31d0230e7c47a3ecf452dd08a1313f6b13c04ad9ab13733661560f4cc875a1fa"} Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.815006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f"} Dec 02 11:16:04 crc kubenswrapper[4813]: I1202 11:16:04.815035 4813 scope.go:117] "RemoveContainer" containerID="de8f6ac0be5b8d2a4f4199b1fe96ec6dd5ae95fdaa0edb49ca2e6460ee882b46" Dec 02 11:18:04 crc kubenswrapper[4813]: I1202 11:18:04.273444 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:18:04 crc kubenswrapper[4813]: I1202 11:18:04.275085 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:18:34 crc kubenswrapper[4813]: I1202 11:18:34.273625 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:18:34 crc kubenswrapper[4813]: I1202 11:18:34.274420 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.765659 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6r8t"] Dec 02 11:18:38 crc kubenswrapper[4813]: E1202 11:18:38.767234 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90315498-62a4-4dd5-a96c-159135299510" containerName="collect-profiles" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.767271 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="90315498-62a4-4dd5-a96c-159135299510" containerName="collect-profiles" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.767769 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="90315498-62a4-4dd5-a96c-159135299510" containerName="collect-profiles" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.770997 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.776019 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6r8t"] Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.882222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-utilities\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.882553 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-catalog-content\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.882650 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8rw7\" (UniqueName: \"kubernetes.io/projected/5d89d074-d930-4a02-851f-2cba0a938714-kube-api-access-m8rw7\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.985139 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-utilities\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.985262 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-catalog-content\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.985390 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8rw7\" (UniqueName: \"kubernetes.io/projected/5d89d074-d930-4a02-851f-2cba0a938714-kube-api-access-m8rw7\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.986303 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-utilities\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:38 crc kubenswrapper[4813]: I1202 11:18:38.986593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-catalog-content\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:39 crc kubenswrapper[4813]: I1202 11:18:39.012161 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m8rw7\" (UniqueName: \"kubernetes.io/projected/5d89d074-d930-4a02-851f-2cba0a938714-kube-api-access-m8rw7\") pod \"certified-operators-c6r8t\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:39 crc kubenswrapper[4813]: I1202 11:18:39.165567 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:39 crc kubenswrapper[4813]: I1202 11:18:39.814414 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6r8t"] Dec 02 11:18:40 crc kubenswrapper[4813]: I1202 11:18:40.284010 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6r8t" event={"ID":"5d89d074-d930-4a02-851f-2cba0a938714","Type":"ContainerStarted","Data":"22ff80c93d27851d1a1bc136b0e4dbaed464e02a56ab59ef42ec5ab1efbfa197"} Dec 02 11:18:41 crc kubenswrapper[4813]: I1202 11:18:41.295986 4813 generic.go:334] "Generic (PLEG): container finished" podID="5d89d074-d930-4a02-851f-2cba0a938714" containerID="455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8" exitCode=0 Dec 02 11:18:41 crc kubenswrapper[4813]: I1202 11:18:41.296184 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6r8t" event={"ID":"5d89d074-d930-4a02-851f-2cba0a938714","Type":"ContainerDied","Data":"455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8"} Dec 02 11:18:41 crc kubenswrapper[4813]: I1202 11:18:41.299107 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:18:42 crc kubenswrapper[4813]: I1202 11:18:42.306654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6r8t" event={"ID":"5d89d074-d930-4a02-851f-2cba0a938714","Type":"ContainerStarted","Data":"8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46"} Dec 02 11:18:43 crc kubenswrapper[4813]: I1202 11:18:43.321323 4813 generic.go:334] "Generic (PLEG): container finished" podID="5d89d074-d930-4a02-851f-2cba0a938714" containerID="8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46" exitCode=0 Dec 02 11:18:43 crc kubenswrapper[4813]: I1202 11:18:43.321443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6r8t" event={"ID":"5d89d074-d930-4a02-851f-2cba0a938714","Type":"ContainerDied","Data":"8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46"} Dec 02 11:18:44 crc kubenswrapper[4813]: I1202 11:18:44.332294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6r8t" event={"ID":"5d89d074-d930-4a02-851f-2cba0a938714","Type":"ContainerStarted","Data":"8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce"} Dec 02 11:18:44 crc kubenswrapper[4813]: I1202 11:18:44.362746 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6r8t" podStartSLOduration=3.658413457 podStartE2EDuration="6.362722734s" podCreationTimestamp="2025-12-02 11:18:38 +0000 UTC" firstStartedPulling="2025-12-02 11:18:41.29866751 +0000 UTC m=+4245.493841812" lastFinishedPulling="2025-12-02 11:18:44.002976787 +0000 UTC m=+4248.198151089" observedRunningTime="2025-12-02 11:18:44.357299711 +0000 UTC m=+4248.552474033" watchObservedRunningTime="2025-12-02 
11:18:44.362722734 +0000 UTC m=+4248.557897036" Dec 02 11:18:49 crc kubenswrapper[4813]: I1202 11:18:49.166112 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:49 crc kubenswrapper[4813]: I1202 11:18:49.167614 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:49 crc kubenswrapper[4813]: I1202 11:18:49.218065 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:49 crc kubenswrapper[4813]: I1202 11:18:49.442924 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:49 crc kubenswrapper[4813]: I1202 11:18:49.496536 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6r8t"] Dec 02 11:18:51 crc kubenswrapper[4813]: I1202 11:18:51.397429 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c6r8t" podUID="5d89d074-d930-4a02-851f-2cba0a938714" containerName="registry-server" containerID="cri-o://8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce" gracePeriod=2 Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.315414 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.414957 4813 generic.go:334] "Generic (PLEG): container finished" podID="5d89d074-d930-4a02-851f-2cba0a938714" containerID="8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce" exitCode=0 Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.415034 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6r8t" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.415028 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6r8t" event={"ID":"5d89d074-d930-4a02-851f-2cba0a938714","Type":"ContainerDied","Data":"8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce"} Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.415538 4813 scope.go:117] "RemoveContainer" containerID="8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.416322 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6r8t" event={"ID":"5d89d074-d930-4a02-851f-2cba0a938714","Type":"ContainerDied","Data":"22ff80c93d27851d1a1bc136b0e4dbaed464e02a56ab59ef42ec5ab1efbfa197"} Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.445545 4813 scope.go:117] "RemoveContainer" containerID="8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.451639 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-utilities\") pod \"5d89d074-d930-4a02-851f-2cba0a938714\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.451747 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-catalog-content\") pod \"5d89d074-d930-4a02-851f-2cba0a938714\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.451787 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8rw7\" (UniqueName: \"kubernetes.io/projected/5d89d074-d930-4a02-851f-2cba0a938714-kube-api-access-m8rw7\") pod \"5d89d074-d930-4a02-851f-2cba0a938714\" (UID: \"5d89d074-d930-4a02-851f-2cba0a938714\") " Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.453181 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-utilities" (OuterVolumeSpecName: "utilities") pod "5d89d074-d930-4a02-851f-2cba0a938714" (UID: "5d89d074-d930-4a02-851f-2cba0a938714"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.458270 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d89d074-d930-4a02-851f-2cba0a938714-kube-api-access-m8rw7" (OuterVolumeSpecName: "kube-api-access-m8rw7") pod "5d89d074-d930-4a02-851f-2cba0a938714" (UID: "5d89d074-d930-4a02-851f-2cba0a938714"). InnerVolumeSpecName "kube-api-access-m8rw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.476491 4813 scope.go:117] "RemoveContainer" containerID="455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.500990 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d89d074-d930-4a02-851f-2cba0a938714" (UID: "5d89d074-d930-4a02-851f-2cba0a938714"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.549670 4813 scope.go:117] "RemoveContainer" containerID="8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce" Dec 02 11:18:52 crc kubenswrapper[4813]: E1202 11:18:52.550317 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce\": container with ID starting with 8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce not found: ID does not exist" containerID="8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.550347 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce"} err="failed to get container status \"8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce\": rpc error: code = NotFound desc = could not find container \"8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce\": container with ID starting with 8451c69dc19f212d5694a610d9fb4532167a9911495fec6693b5ae72c80539ce not found: ID does not exist" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.550388 4813 scope.go:117] "RemoveContainer" containerID="8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46" Dec 02 11:18:52 crc kubenswrapper[4813]: E1202 11:18:52.550694 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46\": container with ID starting with 8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46 not found: ID does not exist" containerID="8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.550733 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46"} err="failed to get container status \"8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46\": rpc error: code = NotFound desc = could not find container \"8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46\": container with ID starting with 8ec8ddc71f802bb8fb8bb7777537b9b7fca9d71d805ac5936ee5f5ebffe9de46 not found: ID does not exist" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.550749 4813 scope.go:117] "RemoveContainer" containerID="455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8" Dec 02 11:18:52 crc kubenswrapper[4813]: E1202 11:18:52.551001 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8\": container with ID starting with 455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8 not found: ID does not exist" containerID="455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.551021 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8"} err="failed to get container status \"455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8\": rpc error: code = NotFound desc = could not find container \"455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8\": container with ID starting with 455f284379629e341fb62adab98006a91d50ab2348c6983f98aea77cbe7fc1e8 not found: ID does not exist" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.554505 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.554541 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d89d074-d930-4a02-851f-2cba0a938714-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.554550 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8rw7\" (UniqueName: \"kubernetes.io/projected/5d89d074-d930-4a02-851f-2cba0a938714-kube-api-access-m8rw7\") on node \"crc\" DevicePath \"\"" Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.759042 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6r8t"] Dec 02 11:18:52 crc kubenswrapper[4813]: I1202 11:18:52.767223 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c6r8t"] Dec 02 11:18:54 crc kubenswrapper[4813]: I1202 11:18:54.084350 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d89d074-d930-4a02-851f-2cba0a938714" path="/var/lib/kubelet/pods/5d89d074-d930-4a02-851f-2cba0a938714/volumes" Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.273401 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.273965 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.274034 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.275179 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f"} 
pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.275282 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" gracePeriod=600 Dec 02 11:19:04 crc kubenswrapper[4813]: E1202 11:19:04.421650 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.543603 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" exitCode=0 Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.543908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f"} Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.543939 4813 scope.go:117] "RemoveContainer" containerID="31d0230e7c47a3ecf452dd08a1313f6b13c04ad9ab13733661560f4cc875a1fa" Dec 02 11:19:04 crc kubenswrapper[4813]: I1202 11:19:04.544632 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:19:04 crc kubenswrapper[4813]: E1202 11:19:04.544857 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:19:17 crc kubenswrapper[4813]: I1202 11:19:17.067917 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:19:17 crc kubenswrapper[4813]: E1202 11:19:17.068776 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.000320 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r8vzq"] Dec 02 11:19:21 crc kubenswrapper[4813]: E1202 11:19:21.001115 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d89d074-d930-4a02-851f-2cba0a938714" containerName="registry-server" Dec 02 11:19:21 crc 
kubenswrapper[4813]: I1202 11:19:21.001126 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d89d074-d930-4a02-851f-2cba0a938714" containerName="registry-server" Dec 02 11:19:21 crc kubenswrapper[4813]: E1202 11:19:21.001139 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d89d074-d930-4a02-851f-2cba0a938714" containerName="extract-content" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.001146 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d89d074-d930-4a02-851f-2cba0a938714" containerName="extract-content" Dec 02 11:19:21 crc kubenswrapper[4813]: E1202 11:19:21.001175 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d89d074-d930-4a02-851f-2cba0a938714" containerName="extract-utilities" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.001181 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d89d074-d930-4a02-851f-2cba0a938714" containerName="extract-utilities" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.001493 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d89d074-d930-4a02-851f-2cba0a938714" containerName="registry-server" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.003024 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.014506 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r8vzq"] Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.117795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-utilities\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.118097 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmcq\" (UniqueName: \"kubernetes.io/projected/a47f1fa8-fe50-4f24-954a-041c9b40c70d-kube-api-access-dfmcq\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.118123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-catalog-content\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.220736 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-utilities\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.220833 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmcq\" (UniqueName: \"kubernetes.io/projected/a47f1fa8-fe50-4f24-954a-041c9b40c70d-kube-api-access-dfmcq\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 
11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.220869 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-catalog-content\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.221332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-utilities\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.221911 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-catalog-content\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.489862 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmcq\" (UniqueName: \"kubernetes.io/projected/a47f1fa8-fe50-4f24-954a-041c9b40c70d-kube-api-access-dfmcq\") pod \"redhat-operators-r8vzq\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:21 crc kubenswrapper[4813]: I1202 11:19:21.632634 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:22 crc kubenswrapper[4813]: I1202 11:19:22.110761 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r8vzq"] Dec 02 11:19:22 crc kubenswrapper[4813]: I1202 11:19:22.695391 4813 generic.go:334] "Generic (PLEG): container finished" podID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerID="8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b" exitCode=0 Dec 02 11:19:22 crc kubenswrapper[4813]: I1202 11:19:22.695490 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8vzq" event={"ID":"a47f1fa8-fe50-4f24-954a-041c9b40c70d","Type":"ContainerDied","Data":"8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b"} Dec 02 11:19:22 crc kubenswrapper[4813]: I1202 11:19:22.695892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8vzq" event={"ID":"a47f1fa8-fe50-4f24-954a-041c9b40c70d","Type":"ContainerStarted","Data":"4c1165bf4fa7b1ff1707cc5ae3cb35f43ee5e6897d14d1f75722a2f1bfb8fcad"} Dec 02 11:19:24 crc kubenswrapper[4813]: I1202 11:19:24.715190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8vzq" event={"ID":"a47f1fa8-fe50-4f24-954a-041c9b40c70d","Type":"ContainerStarted","Data":"3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617"} Dec 02 11:19:26 crc kubenswrapper[4813]: I1202 11:19:26.734930 4813 generic.go:334] "Generic (PLEG): container finished" podID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerID="3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617" exitCode=0 Dec 02 11:19:26 crc kubenswrapper[4813]: I1202 11:19:26.735240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8vzq" 
event={"ID":"a47f1fa8-fe50-4f24-954a-041c9b40c70d","Type":"ContainerDied","Data":"3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617"} Dec 02 11:19:27 crc kubenswrapper[4813]: I1202 11:19:27.746929 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8vzq" event={"ID":"a47f1fa8-fe50-4f24-954a-041c9b40c70d","Type":"ContainerStarted","Data":"8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112"} Dec 02 11:19:27 crc kubenswrapper[4813]: I1202 11:19:27.776304 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r8vzq" podStartSLOduration=3.228327726 podStartE2EDuration="7.776281049s" podCreationTimestamp="2025-12-02 11:19:20 +0000 UTC" firstStartedPulling="2025-12-02 11:19:22.699258477 +0000 UTC m=+4286.894432779" lastFinishedPulling="2025-12-02 11:19:27.24721179 +0000 UTC m=+4291.442386102" observedRunningTime="2025-12-02 11:19:27.768288992 +0000 UTC m=+4291.963463314" watchObservedRunningTime="2025-12-02 11:19:27.776281049 +0000 UTC m=+4291.971455351" Dec 02 11:19:28 crc kubenswrapper[4813]: I1202 11:19:28.067977 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:19:28 crc kubenswrapper[4813]: E1202 11:19:28.068618 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:19:31 crc kubenswrapper[4813]: I1202 11:19:31.633046 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:31 crc kubenswrapper[4813]: I1202 11:19:31.633663 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:32 crc kubenswrapper[4813]: I1202 11:19:32.925422 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r8vzq" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="registry-server" probeResult="failure" output=< Dec 02 11:19:32 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 11:19:32 crc kubenswrapper[4813]: > Dec 02 11:19:41 crc kubenswrapper[4813]: I1202 11:19:41.679024 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:41 crc kubenswrapper[4813]: I1202 11:19:41.752127 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:41 crc kubenswrapper[4813]: I1202 11:19:41.914238 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r8vzq"] Dec 02 11:19:42 crc kubenswrapper[4813]: I1202 11:19:42.068570 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:19:42 crc kubenswrapper[4813]: E1202 11:19:42.069061 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:19:42 crc kubenswrapper[4813]: I1202 11:19:42.880689 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r8vzq" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="registry-server" containerID="cri-o://8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112" gracePeriod=2 Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.798033 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.893321 4813 generic.go:334] "Generic (PLEG): container finished" podID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerID="8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112" exitCode=0 Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.893369 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8vzq" event={"ID":"a47f1fa8-fe50-4f24-954a-041c9b40c70d","Type":"ContainerDied","Data":"8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112"} Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.893406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8vzq" event={"ID":"a47f1fa8-fe50-4f24-954a-041c9b40c70d","Type":"ContainerDied","Data":"4c1165bf4fa7b1ff1707cc5ae3cb35f43ee5e6897d14d1f75722a2f1bfb8fcad"} Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.893401 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r8vzq" Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.893422 4813 scope.go:117] "RemoveContainer" containerID="8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112" Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.918636 4813 scope.go:117] "RemoveContainer" containerID="3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617" Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.943580 4813 scope.go:117] "RemoveContainer" containerID="8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b" Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.968504 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfmcq\" (UniqueName: \"kubernetes.io/projected/a47f1fa8-fe50-4f24-954a-041c9b40c70d-kube-api-access-dfmcq\") pod \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.968589 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-catalog-content\") pod \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.968662 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-utilities\") pod \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\" (UID: \"a47f1fa8-fe50-4f24-954a-041c9b40c70d\") " Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.969395 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-utilities" (OuterVolumeSpecName: "utilities") pod "a47f1fa8-fe50-4f24-954a-041c9b40c70d" (UID: "a47f1fa8-fe50-4f24-954a-041c9b40c70d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:19:43 crc kubenswrapper[4813]: I1202 11:19:43.974625 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a47f1fa8-fe50-4f24-954a-041c9b40c70d-kube-api-access-dfmcq" (OuterVolumeSpecName: "kube-api-access-dfmcq") pod "a47f1fa8-fe50-4f24-954a-041c9b40c70d" (UID: "a47f1fa8-fe50-4f24-954a-041c9b40c70d"). InnerVolumeSpecName "kube-api-access-dfmcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.042602 4813 scope.go:117] "RemoveContainer" containerID="8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112" Dec 02 11:19:44 crc kubenswrapper[4813]: E1202 11:19:44.043335 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112\": container with ID starting with 8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112 not found: ID does not exist" containerID="8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.043472 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112"} err="failed to get container status \"8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112\": rpc error: code = NotFound desc = could not find container \"8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112\": container with ID starting with 8d0a8fc0995e7b97ee7c7f01445a441f3a435358e2ef9dd0bcf8509354af8112 not found: ID does not exist" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.043589 4813 scope.go:117] "RemoveContainer" containerID="3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617" Dec 02 11:19:44 crc kubenswrapper[4813]: E1202 11:19:44.044239 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617\": container with ID starting with 3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617 not found: ID does not exist" containerID="3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.044303 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617"} err="failed to get container status \"3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617\": rpc error: code = NotFound desc = could not find container \"3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617\": container with ID starting with 3e77aacf228c9a98eaf5446d1150a60fe2161607e29d3bec5e022aece5c13617 not found: ID does not exist" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.044347 4813 scope.go:117] "RemoveContainer" containerID="8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b" Dec 02 11:19:44 crc kubenswrapper[4813]: E1202 11:19:44.044704 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b\": container with ID starting with 8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b not found: ID does not exist" containerID="8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.044743 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b"} err="failed to get container status \"8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b\": rpc error: code = NotFound desc = could not 
find container \"8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b\": container with ID starting with 8626ef4c97c8b6ac900b2d597b648a559b2a5f0869409b9b87e2452cabe8332b not found: ID does not exist" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.070803 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfmcq\" (UniqueName: \"kubernetes.io/projected/a47f1fa8-fe50-4f24-954a-041c9b40c70d-kube-api-access-dfmcq\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.070843 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.090237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a47f1fa8-fe50-4f24-954a-041c9b40c70d" (UID: "a47f1fa8-fe50-4f24-954a-041c9b40c70d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.173231 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a47f1fa8-fe50-4f24-954a-041c9b40c70d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.236635 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r8vzq"] Dec 02 11:19:44 crc kubenswrapper[4813]: I1202 11:19:44.250552 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r8vzq"] Dec 02 11:19:46 crc kubenswrapper[4813]: I1202 11:19:46.090431 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" path="/var/lib/kubelet/pods/a47f1fa8-fe50-4f24-954a-041c9b40c70d/volumes" Dec 02 11:19:49 crc kubenswrapper[4813]: I1202 11:19:49.041770 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-fl2xz"] Dec 02 11:19:49 crc kubenswrapper[4813]: I1202 11:19:49.051695 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-d1b0-account-create-update-z9nlm"] Dec 02 11:19:49 crc kubenswrapper[4813]: I1202 11:19:49.062658 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-fl2xz"] Dec 02 11:19:49 crc kubenswrapper[4813]: I1202 11:19:49.073702 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-d1b0-account-create-update-z9nlm"] Dec 02 11:19:50 crc kubenswrapper[4813]: I1202 11:19:50.087891 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1edbccb4-0310-4649-aede-b296ed4dbf23" path="/var/lib/kubelet/pods/1edbccb4-0310-4649-aede-b296ed4dbf23/volumes" Dec 02 11:19:50 crc kubenswrapper[4813]: I1202 11:19:50.089028 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd894e4-9c8a-4553-953a-c954003e97cb" path="/var/lib/kubelet/pods/dbd894e4-9c8a-4553-953a-c954003e97cb/volumes" Dec 02 11:19:56 crc kubenswrapper[4813]: I1202 11:19:56.075915 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:19:56 crc kubenswrapper[4813]: E1202 11:19:56.076770 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:20:06 crc kubenswrapper[4813]: I1202 11:20:06.725751 4813 scope.go:117] "RemoveContainer" containerID="90d2fa373e19cc2cc1a421eaa83f90b15d4b6c736eb859c4e994b8f1b1c6a40d" Dec 02 11:20:06 crc kubenswrapper[4813]: I1202 11:20:06.770623 4813 scope.go:117] "RemoveContainer" containerID="b3593bb1cabc8dd56962de5f743b403cd9ae20fc6bad153fba94b7f33d8b7691" Dec 02 11:20:10 crc kubenswrapper[4813]: I1202 11:20:10.067677 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:20:10 crc kubenswrapper[4813]: E1202 11:20:10.068581 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:20:20 crc kubenswrapper[4813]: I1202 11:20:20.055880 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-p5xnm"] Dec 02 11:20:20 crc kubenswrapper[4813]: I1202 11:20:20.082162 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-p5xnm"] Dec 02 11:20:22 crc kubenswrapper[4813]: I1202 11:20:22.085757 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bfd912-d75b-48af-8433-1b5d24d50856" path="/var/lib/kubelet/pods/17bfd912-d75b-48af-8433-1b5d24d50856/volumes" Dec 02 11:20:24 crc kubenswrapper[4813]: I1202 11:20:24.069160 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:20:24 crc kubenswrapper[4813]: E1202 11:20:24.070212 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:20:35 crc kubenswrapper[4813]: I1202 11:20:35.068696 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:20:35 crc kubenswrapper[4813]: E1202 11:20:35.069470 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:20:46 crc kubenswrapper[4813]: I1202 11:20:46.086792 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:20:46 crc kubenswrapper[4813]: E1202 11:20:46.087650 4813 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:21:00 crc kubenswrapper[4813]: I1202 11:21:00.069377 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:21:00 crc kubenswrapper[4813]: E1202 11:21:00.070607 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:21:06 crc kubenswrapper[4813]: I1202 11:21:06.921375 4813 scope.go:117] "RemoveContainer" containerID="031eee95c4f032617bad83ea80f169d43b557160bf91e66593b999253cb701aa" Dec 02 11:21:14 crc kubenswrapper[4813]: I1202 11:21:14.068157 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:21:14 crc kubenswrapper[4813]: E1202 11:21:14.068837 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:21:27 crc kubenswrapper[4813]: I1202 11:21:27.068898 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:21:27 crc kubenswrapper[4813]: E1202 11:21:27.070197 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:21:42 crc kubenswrapper[4813]: I1202 11:21:42.068263 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:21:42 crc kubenswrapper[4813]: E1202 11:21:42.069324 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:21:54 crc kubenswrapper[4813]: I1202 11:21:54.068205 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:21:54 crc kubenswrapper[4813]: E1202 11:21:54.068937 4813 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.832098 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c8xpx"] Dec 02 11:22:07 crc kubenswrapper[4813]: E1202 11:22:07.833242 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="extract-utilities" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.833263 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="extract-utilities" Dec 02 11:22:07 crc kubenswrapper[4813]: E1202 11:22:07.833295 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="extract-content" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.833306 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="extract-content" Dec 02 11:22:07 crc kubenswrapper[4813]: E1202 11:22:07.833326 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="registry-server" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.833338 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="registry-server" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.833624 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47f1fa8-fe50-4f24-954a-041c9b40c70d" containerName="registry-server" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.835926 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.840240 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8xpx"] Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.868921 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-utilities\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.869280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-catalog-content\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.869408 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrrqg\" (UniqueName: \"kubernetes.io/projected/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-kube-api-access-zrrqg\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.971162 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-catalog-content\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.971722 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrrqg\" (UniqueName: \"kubernetes.io/projected/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-kube-api-access-zrrqg\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.971982 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-catalog-content\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.972134 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-utilities\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:07 crc kubenswrapper[4813]: I1202 11:22:07.987920 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-utilities\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:08 crc kubenswrapper[4813]: I1202 11:22:08.024020 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zrrqg\" (UniqueName: \"kubernetes.io/projected/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-kube-api-access-zrrqg\") pod \"community-operators-c8xpx\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:08 crc kubenswrapper[4813]: I1202 11:22:08.068492 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:22:08 crc kubenswrapper[4813]: E1202 11:22:08.068736 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:22:08 crc kubenswrapper[4813]: I1202 11:22:08.195273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:08 crc kubenswrapper[4813]: I1202 11:22:08.807535 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8xpx"] Dec 02 11:22:09 crc kubenswrapper[4813]: I1202 11:22:09.407945 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerID="4b911631441ac58d11f78cb93dfe79b67bf29459a091028e84e8f1c3e28d97ef" exitCode=0 Dec 02 11:22:09 crc kubenswrapper[4813]: I1202 11:22:09.407988 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xpx" event={"ID":"1c6fc647-d16e-493f-9e0a-cffff8f96c8a","Type":"ContainerDied","Data":"4b911631441ac58d11f78cb93dfe79b67bf29459a091028e84e8f1c3e28d97ef"} Dec 02 11:22:09 crc kubenswrapper[4813]: I1202 11:22:09.408031 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xpx" event={"ID":"1c6fc647-d16e-493f-9e0a-cffff8f96c8a","Type":"ContainerStarted","Data":"edba54ceebf1d4f95a8ad3ebed5d5173855eaa40195cf9399424952ce2dd9f78"} Dec 02 11:22:10 crc kubenswrapper[4813]: I1202 11:22:10.416530 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xpx" event={"ID":"1c6fc647-d16e-493f-9e0a-cffff8f96c8a","Type":"ContainerStarted","Data":"e54e5f81b581c4b0b22337bbc0317fdc059bb3657de6cd106f521a25faf63af6"} Dec 02 11:22:11 crc kubenswrapper[4813]: I1202 11:22:11.430650 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerID="e54e5f81b581c4b0b22337bbc0317fdc059bb3657de6cd106f521a25faf63af6" exitCode=0 Dec 02 11:22:11 crc kubenswrapper[4813]: I1202 11:22:11.430740 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xpx" event={"ID":"1c6fc647-d16e-493f-9e0a-cffff8f96c8a","Type":"ContainerDied","Data":"e54e5f81b581c4b0b22337bbc0317fdc059bb3657de6cd106f521a25faf63af6"} Dec 02 11:22:12 crc kubenswrapper[4813]: I1202 11:22:12.445679 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xpx" event={"ID":"1c6fc647-d16e-493f-9e0a-cffff8f96c8a","Type":"ContainerStarted","Data":"5c431684ede625e2049e2fb839d4dc14ec9e287061e6020ddb9d4eb1f39de755"} Dec 02 11:22:12 crc kubenswrapper[4813]: I1202 11:22:12.470415 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c8xpx" podStartSLOduration=2.8958336940000002 podStartE2EDuration="5.470390177s" podCreationTimestamp="2025-12-02 11:22:07 +0000 UTC" firstStartedPulling="2025-12-02 11:22:09.40994795 +0000 UTC m=+4453.605122252" lastFinishedPulling="2025-12-02 11:22:11.984504423 +0000 UTC m=+4456.179678735" observedRunningTime="2025-12-02 11:22:12.462436151 +0000 UTC m=+4456.657610523" watchObservedRunningTime="2025-12-02 11:22:12.470390177 +0000 UTC m=+4456.665564509" Dec 02 11:22:18 crc kubenswrapper[4813]: I1202 11:22:18.196369 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:18 crc kubenswrapper[4813]: I1202 11:22:18.197803 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:18 crc kubenswrapper[4813]: I1202 11:22:18.252136 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:18 crc kubenswrapper[4813]: I1202 11:22:18.563422 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:18 crc kubenswrapper[4813]: I1202 11:22:18.676286 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8xpx"] Dec 02 11:22:20 crc kubenswrapper[4813]: I1202 11:22:20.529778 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c8xpx" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerName="registry-server" containerID="cri-o://5c431684ede625e2049e2fb839d4dc14ec9e287061e6020ddb9d4eb1f39de755" gracePeriod=2 Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.537632 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerID="5c431684ede625e2049e2fb839d4dc14ec9e287061e6020ddb9d4eb1f39de755" exitCode=0 Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.537931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xpx" event={"ID":"1c6fc647-d16e-493f-9e0a-cffff8f96c8a","Type":"ContainerDied","Data":"5c431684ede625e2049e2fb839d4dc14ec9e287061e6020ddb9d4eb1f39de755"} Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.697285 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.729374 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-catalog-content\") pod \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.729533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrrqg\" (UniqueName: \"kubernetes.io/projected/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-kube-api-access-zrrqg\") pod \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.729685 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-utilities\") pod \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\" (UID: \"1c6fc647-d16e-493f-9e0a-cffff8f96c8a\") " Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.733424 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-utilities" (OuterVolumeSpecName: "utilities") pod "1c6fc647-d16e-493f-9e0a-cffff8f96c8a" (UID: "1c6fc647-d16e-493f-9e0a-cffff8f96c8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.744564 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-kube-api-access-zrrqg" (OuterVolumeSpecName: "kube-api-access-zrrqg") pod "1c6fc647-d16e-493f-9e0a-cffff8f96c8a" (UID: "1c6fc647-d16e-493f-9e0a-cffff8f96c8a"). InnerVolumeSpecName "kube-api-access-zrrqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.803604 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c6fc647-d16e-493f-9e0a-cffff8f96c8a" (UID: "1c6fc647-d16e-493f-9e0a-cffff8f96c8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.833760 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.833800 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrrqg\" (UniqueName: \"kubernetes.io/projected/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-kube-api-access-zrrqg\") on node \"crc\" DevicePath \"\"" Dec 02 11:22:21 crc kubenswrapper[4813]: I1202 11:22:21.833815 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c6fc647-d16e-493f-9e0a-cffff8f96c8a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:22:22 crc kubenswrapper[4813]: I1202 11:22:22.068109 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:22:22 crc kubenswrapper[4813]: E1202 11:22:22.068419 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:22:22 crc kubenswrapper[4813]: I1202 11:22:22.554700 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xpx" event={"ID":"1c6fc647-d16e-493f-9e0a-cffff8f96c8a","Type":"ContainerDied","Data":"edba54ceebf1d4f95a8ad3ebed5d5173855eaa40195cf9399424952ce2dd9f78"} Dec 02 11:22:22 crc kubenswrapper[4813]: I1202 11:22:22.555256 4813 scope.go:117] "RemoveContainer" containerID="5c431684ede625e2049e2fb839d4dc14ec9e287061e6020ddb9d4eb1f39de755" Dec 02 11:22:22 crc kubenswrapper[4813]: I1202 11:22:22.554814 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8xpx" Dec 02 11:22:22 crc kubenswrapper[4813]: I1202 11:22:22.575025 4813 scope.go:117] "RemoveContainer" containerID="e54e5f81b581c4b0b22337bbc0317fdc059bb3657de6cd106f521a25faf63af6" Dec 02 11:22:22 crc kubenswrapper[4813]: I1202 11:22:22.585951 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8xpx"] Dec 02 11:22:22 crc kubenswrapper[4813]: I1202 11:22:22.590377 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c8xpx"] Dec 02 11:22:22 crc kubenswrapper[4813]: I1202 11:22:22.623511 4813 scope.go:117] "RemoveContainer" containerID="4b911631441ac58d11f78cb93dfe79b67bf29459a091028e84e8f1c3e28d97ef" Dec 02 11:22:24 crc kubenswrapper[4813]: I1202 11:22:24.089313 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" path="/var/lib/kubelet/pods/1c6fc647-d16e-493f-9e0a-cffff8f96c8a/volumes" Dec 02 11:22:33 crc kubenswrapper[4813]: I1202 11:22:33.069224 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:22:33 crc kubenswrapper[4813]: E1202 11:22:33.070150 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:22:47 crc kubenswrapper[4813]: I1202 11:22:47.068332 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:22:47 crc kubenswrapper[4813]: E1202 11:22:47.071310 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:23:01 crc kubenswrapper[4813]: I1202 11:23:01.067564 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:23:01 crc kubenswrapper[4813]: E1202 11:23:01.068417 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:23:14 crc kubenswrapper[4813]: I1202 11:23:14.069362 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:23:14 crc kubenswrapper[4813]: E1202 11:23:14.070256 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:23:26 crc kubenswrapper[4813]: I1202 11:23:26.068840 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:23:26 crc kubenswrapper[4813]: E1202 11:23:26.070404 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:23:39 crc kubenswrapper[4813]: I1202 11:23:39.067608 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:23:39 crc kubenswrapper[4813]: E1202 11:23:39.069540 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:23:52 crc kubenswrapper[4813]: I1202 11:23:52.070770 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:23:52 crc kubenswrapper[4813]: E1202 11:23:52.071525 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:24:03 crc kubenswrapper[4813]: I1202 11:24:03.069227 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:24:03 crc kubenswrapper[4813]: E1202 11:24:03.072998 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:24:17 crc kubenswrapper[4813]: I1202 11:24:17.069667 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:24:17 crc kubenswrapper[4813]: I1202 11:24:17.655823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"fc66c69d2ed116d9c2c499dd1870a147dfbb225a57f72561832da8893726569c"} Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.906801 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-c5mpx"] Dec 02 11:24:26 crc kubenswrapper[4813]: E1202 11:24:26.908000 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerName="registry-server" Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.908021 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerName="registry-server" Dec 02 11:24:26 crc kubenswrapper[4813]: E1202 11:24:26.908042 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerName="extract-utilities" Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.908054 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerName="extract-utilities" Dec 02 11:24:26 crc kubenswrapper[4813]: E1202 11:24:26.908097 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerName="extract-content" Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.908111 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerName="extract-content" Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.908479 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6fc647-d16e-493f-9e0a-cffff8f96c8a" containerName="registry-server" Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.910823 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.930834 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5mpx"] Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.989623 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-catalog-content\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.989694 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-utilities\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:26 crc kubenswrapper[4813]: I1202 11:24:26.989758 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94wh\" (UniqueName: \"kubernetes.io/projected/54447db1-536b-4291-a465-393fd36a8a4b-kube-api-access-m94wh\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:27 crc kubenswrapper[4813]: I1202 11:24:27.091909 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-utilities\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:27 crc kubenswrapper[4813]: I1202 11:24:27.091972 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-catalog-content\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:27 crc kubenswrapper[4813]: I1202 11:24:27.092036 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94wh\" (UniqueName: \"kubernetes.io/projected/54447db1-536b-4291-a465-393fd36a8a4b-kube-api-access-m94wh\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:27 crc kubenswrapper[4813]: I1202 11:24:27.092773 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-utilities\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:27 crc kubenswrapper[4813]: I1202 11:24:27.093011 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-catalog-content\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:27 crc kubenswrapper[4813]: I1202 11:24:27.115941 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94wh\" (UniqueName: \"kubernetes.io/projected/54447db1-536b-4291-a465-393fd36a8a4b-kube-api-access-m94wh\") pod \"redhat-marketplace-c5mpx\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:27 crc kubenswrapper[4813]: I1202 11:24:27.251703 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:27 crc kubenswrapper[4813]: I1202 11:24:27.784984 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5mpx"] Dec 02 11:24:27 crc kubenswrapper[4813]: W1202 11:24:27.789688 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54447db1_536b_4291_a465_393fd36a8a4b.slice/crio-0d18b5318280887f80bc3236d262daabbaf728584e579c1ba89932c27407f8e5 WatchSource:0}: Error finding container 0d18b5318280887f80bc3236d262daabbaf728584e579c1ba89932c27407f8e5: Status 404 returned error can't find the container with id 0d18b5318280887f80bc3236d262daabbaf728584e579c1ba89932c27407f8e5 Dec 02 11:24:28 crc kubenswrapper[4813]: I1202 11:24:28.776117 4813 generic.go:334] "Generic (PLEG): container finished" podID="54447db1-536b-4291-a465-393fd36a8a4b" containerID="e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa" exitCode=0 Dec 02 11:24:28 crc kubenswrapper[4813]: I1202 11:24:28.776370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5mpx" event={"ID":"54447db1-536b-4291-a465-393fd36a8a4b","Type":"ContainerDied","Data":"e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa"} Dec 02 11:24:28 crc kubenswrapper[4813]: I1202 11:24:28.776486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5mpx" event={"ID":"54447db1-536b-4291-a465-393fd36a8a4b","Type":"ContainerStarted","Data":"0d18b5318280887f80bc3236d262daabbaf728584e579c1ba89932c27407f8e5"} Dec 02 11:24:28 crc kubenswrapper[4813]: I1202 11:24:28.778916 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:24:30 crc kubenswrapper[4813]: I1202 11:24:30.798160 4813 generic.go:334] "Generic (PLEG): container finished" podID="54447db1-536b-4291-a465-393fd36a8a4b" containerID="20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c" exitCode=0 Dec 02 11:24:30 crc kubenswrapper[4813]: I1202 11:24:30.798296 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5mpx" event={"ID":"54447db1-536b-4291-a465-393fd36a8a4b","Type":"ContainerDied","Data":"20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c"} Dec 02 11:24:31 crc kubenswrapper[4813]: I1202 11:24:31.811173 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5mpx" event={"ID":"54447db1-536b-4291-a465-393fd36a8a4b","Type":"ContainerStarted","Data":"1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6"} Dec 02 11:24:31 crc kubenswrapper[4813]: I1202 11:24:31.834997 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c5mpx" podStartSLOduration=3.335923735 podStartE2EDuration="5.834976676s" podCreationTimestamp="2025-12-02 11:24:26 +0000 UTC" firstStartedPulling="2025-12-02 11:24:28.778627926 +0000 UTC m=+4592.973802228" lastFinishedPulling="2025-12-02 11:24:31.277680827 +0000 UTC m=+4595.472855169" observedRunningTime="2025-12-02 11:24:31.833514865 +0000 UTC m=+4596.028689167" watchObservedRunningTime="2025-12-02 11:24:31.834976676 +0000 UTC m=+4596.030150978" Dec 02 11:24:37 crc kubenswrapper[4813]: I1202 11:24:37.252718 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
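
An aside on the pod_startup_latency_tracker record above: podStartSLOduration is the end-to-end startup time minus the time spent pulling images, which the tracker excludes from the SLO number. The m=+ values are kubelet monotonic-clock offsets in seconds, so the arithmetic can be checked directly. A quick sketch (variable names are ours, not the kubelet's):

    package main

    import "fmt"

    func main() {
        e2e := 5.834976676                      // podStartE2EDuration, in seconds
        pull := 4595.472855169 - 4592.973802228 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
        fmt.Printf("podStartSLOduration = %.9f\n", e2e-pull) // prints 3.335923735, matching the log
    }
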
pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:37 crc kubenswrapper[4813]: I1202 11:24:37.253406 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:37 crc kubenswrapper[4813]: I1202 11:24:37.343119 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:37 crc kubenswrapper[4813]: I1202 11:24:37.940398 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:37 crc kubenswrapper[4813]: I1202 11:24:37.993719 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5mpx"] Dec 02 11:24:39 crc kubenswrapper[4813]: I1202 11:24:39.889982 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c5mpx" podUID="54447db1-536b-4291-a465-393fd36a8a4b" containerName="registry-server" containerID="cri-o://1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6" gracePeriod=2 Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.459646 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.557459 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-utilities\") pod \"54447db1-536b-4291-a465-393fd36a8a4b\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.557977 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m94wh\" (UniqueName: \"kubernetes.io/projected/54447db1-536b-4291-a465-393fd36a8a4b-kube-api-access-m94wh\") pod \"54447db1-536b-4291-a465-393fd36a8a4b\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.558010 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-catalog-content\") pod \"54447db1-536b-4291-a465-393fd36a8a4b\" (UID: \"54447db1-536b-4291-a465-393fd36a8a4b\") " Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.559588 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-utilities" (OuterVolumeSpecName: "utilities") pod "54447db1-536b-4291-a465-393fd36a8a4b" (UID: "54447db1-536b-4291-a465-393fd36a8a4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.565971 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54447db1-536b-4291-a465-393fd36a8a4b-kube-api-access-m94wh" (OuterVolumeSpecName: "kube-api-access-m94wh") pod "54447db1-536b-4291-a465-393fd36a8a4b" (UID: "54447db1-536b-4291-a465-393fd36a8a4b"). InnerVolumeSpecName "kube-api-access-m94wh". 
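
The "Killing container with a grace period ... gracePeriod=2" record above is the usual two-step termination: the runtime delivers SIGTERM, and only if the process outlives the grace period does it escalate to SIGKILL (the 2s presumably comes from the catalog pod's terminationGracePeriodSeconds). A sketch of that escalation against a plain local process, not the CRI:

    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace mirrors the SIGTERM-then-SIGKILL escalation: ask politely,
    // wait out the grace period, then force-kill whatever is still running.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
        _ = cmd.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case <-done: // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // SIGKILL, no more waiting
            <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        _ = cmd.Start()
        stopWithGrace(cmd, 2*time.Second) // gracePeriod=2, as logged
    }
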
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.588324 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54447db1-536b-4291-a465-393fd36a8a4b" (UID: "54447db1-536b-4291-a465-393fd36a8a4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.663862 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m94wh\" (UniqueName: \"kubernetes.io/projected/54447db1-536b-4291-a465-393fd36a8a4b-kube-api-access-m94wh\") on node \"crc\" DevicePath \"\"" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.663911 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.663925 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54447db1-536b-4291-a465-393fd36a8a4b-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.902362 4813 generic.go:334] "Generic (PLEG): container finished" podID="54447db1-536b-4291-a465-393fd36a8a4b" containerID="1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6" exitCode=0 Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.902412 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5mpx" event={"ID":"54447db1-536b-4291-a465-393fd36a8a4b","Type":"ContainerDied","Data":"1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6"} Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.902447 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5mpx" event={"ID":"54447db1-536b-4291-a465-393fd36a8a4b","Type":"ContainerDied","Data":"0d18b5318280887f80bc3236d262daabbaf728584e579c1ba89932c27407f8e5"} Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.902489 4813 scope.go:117] "RemoveContainer" containerID="1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.902510 4813 util.go:48] "No ready sandbox for pod can be found. 
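
The unmount records above follow a strict per-volume order: UnmountVolume.TearDown first, and only then the "Volume detached ... DevicePath \"\"" report. emptyDir and projected volumes have no block device behind them, which is why DevicePath stays empty. A toy version of that ordering (paths and names illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // unmountVolume tears down the per-pod directory for a volume, then reports
    // it detached. For emptyDir/projected volumes "teardown" is just local file
    // cleanup; there is no device, hence DevicePath "".
    func unmountVolume(podDir, volume string) error {
        if err := os.RemoveAll(filepath.Join(podDir, "volumes", volume)); err != nil {
            return err // TearDown failed; do not report the volume detached
        }
        fmt.Printf("Volume detached for volume %q on node \"crc\" DevicePath \"\"\n", volume)
        return nil
    }

    func main() {
        podDir, _ := os.MkdirTemp("", "pod")
        defer os.RemoveAll(podDir)
        _ = os.MkdirAll(filepath.Join(podDir, "volumes", "utilities"), 0o755)
        _ = unmountVolume(podDir, "utilities")
    }
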
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c5mpx" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.933342 4813 scope.go:117] "RemoveContainer" containerID="20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.943724 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5mpx"] Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.951670 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5mpx"] Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.969618 4813 scope.go:117] "RemoveContainer" containerID="e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.998200 4813 scope.go:117] "RemoveContainer" containerID="1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6" Dec 02 11:24:40 crc kubenswrapper[4813]: E1202 11:24:40.998579 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6\": container with ID starting with 1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6 not found: ID does not exist" containerID="1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.998624 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6"} err="failed to get container status \"1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6\": rpc error: code = NotFound desc = could not find container \"1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6\": container with ID starting with 1c3356b02756f133270e39e8dc15e012ecb1c013ffd1dec195c30c0067f704d6 not found: ID does not exist" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.998649 4813 scope.go:117] "RemoveContainer" containerID="20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c" Dec 02 11:24:40 crc kubenswrapper[4813]: E1202 11:24:40.998918 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c\": container with ID starting with 20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c not found: ID does not exist" containerID="20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.998944 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c"} err="failed to get container status \"20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c\": rpc error: code = NotFound desc = could not find container \"20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c\": container with ID starting with 20b1f3ca5286bfe611357daccd4ffd345e7faf12480551fa0ddd5131e2886f1c not found: ID does not exist" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.998958 4813 scope.go:117] "RemoveContainer" containerID="e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa" Dec 02 11:24:40 crc kubenswrapper[4813]: E1202 11:24:40.999219 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa\": container with ID starting with e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa not found: ID does not exist" containerID="e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa" Dec 02 11:24:40 crc kubenswrapper[4813]: I1202 11:24:40.999239 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa"} err="failed to get container status \"e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa\": rpc error: code = NotFound desc = could not find container \"e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa\": container with ID starting with e84439cafc03f3ac863aa6fa4fdacd2bb3c5bbb919fa621f653c3d81d34641fa not found: ID does not exist" Dec 02 11:24:42 crc kubenswrapper[4813]: I1202 11:24:42.079331 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54447db1-536b-4291-a465-393fd36a8a4b" path="/var/lib/kubelet/pods/54447db1-536b-4291-a465-393fd36a8a4b/volumes" Dec 02 11:26:34 crc kubenswrapper[4813]: I1202 11:26:34.274251 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:26:34 crc kubenswrapper[4813]: I1202 11:26:34.275925 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:27:04 crc kubenswrapper[4813]: I1202 11:27:04.274065 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:27:04 crc kubenswrapper[4813]: I1202 11:27:04.274587 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:27:34 crc kubenswrapper[4813]: I1202 11:27:34.274120 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:27:34 crc kubenswrapper[4813]: I1202 11:27:34.274585 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:27:34 crc kubenswrapper[4813]: I1202 11:27:34.274632 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
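
The RemoveContainer errors just above are the tolerated-NotFound pattern: by the time the kubelet asks the runtime for each container's status, CRI-O has already removed it, so the error is logged and the delete is treated as already done. A sketch with a stdlib sentinel standing in for the CRI's code = NotFound (the shortened container ID is illustrative):

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the gRPC "code = NotFound" status in the log.
    var errNotFound = errors.New("NotFound: could not find container")

    // removeContainer imagines a CRI RemoveContainer call racing the runtime's
    // own cleanup; here the container is always already gone.
    func removeContainer(id string) error {
        return fmt.Errorf("failed to get container status %q: %w", id, errNotFound)
    }

    func main() {
        if err := removeContainer("1c3356b0"); errors.Is(err, errNotFound) {
            // Idempotent delete: "container absent" is the goal state and it
            // already holds, so log and move on rather than retry or fail.
            fmt.Println("DeleteContainer returned error (already gone):", err)
        }
    }
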
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 11:27:34 crc kubenswrapper[4813]: I1202 11:27:34.275419 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc66c69d2ed116d9c2c499dd1870a147dfbb225a57f72561832da8893726569c"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:27:34 crc kubenswrapper[4813]: I1202 11:27:34.275483 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://fc66c69d2ed116d9c2c499dd1870a147dfbb225a57f72561832da8893726569c" gracePeriod=600 Dec 02 11:27:34 crc kubenswrapper[4813]: I1202 11:27:34.563642 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="fc66c69d2ed116d9c2c499dd1870a147dfbb225a57f72561832da8893726569c" exitCode=0 Dec 02 11:27:34 crc kubenswrapper[4813]: I1202 11:27:34.563738 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"fc66c69d2ed116d9c2c499dd1870a147dfbb225a57f72561832da8893726569c"} Dec 02 11:27:34 crc kubenswrapper[4813]: I1202 11:27:34.564025 4813 scope.go:117] "RemoveContainer" containerID="dff5ef84815b4bc82156bef8bee671bf44bc3b4887a836f7017327d874c4263f" Dec 02 11:27:35 crc kubenswrapper[4813]: I1202 11:27:35.574801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"} Dec 02 11:28:59 crc kubenswrapper[4813]: I1202 11:28:59.829816 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kdn2m"] Dec 02 11:28:59 crc kubenswrapper[4813]: E1202 11:28:59.830866 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54447db1-536b-4291-a465-393fd36a8a4b" containerName="extract-content" Dec 02 11:28:59 crc kubenswrapper[4813]: I1202 11:28:59.830880 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="54447db1-536b-4291-a465-393fd36a8a4b" containerName="extract-content" Dec 02 11:28:59 crc kubenswrapper[4813]: E1202 11:28:59.830894 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54447db1-536b-4291-a465-393fd36a8a4b" containerName="registry-server" Dec 02 11:28:59 crc kubenswrapper[4813]: I1202 11:28:59.830901 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="54447db1-536b-4291-a465-393fd36a8a4b" containerName="registry-server" Dec 02 11:28:59 crc kubenswrapper[4813]: E1202 11:28:59.830916 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54447db1-536b-4291-a465-393fd36a8a4b" containerName="extract-utilities" Dec 02 11:28:59 crc kubenswrapper[4813]: I1202 11:28:59.830923 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="54447db1-536b-4291-a465-393fd36a8a4b" containerName="extract-utilities" Dec 02 11:28:59 crc kubenswrapper[4813]: I1202 11:28:59.831127 4813 memory_manager.go:354] "RemoveStaleState removing state" 
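
The machine-config-daemon sequence above is the liveness loop closing: repeated connection-refused failures against http://127.0.0.1:8798/health eventually trip the probe, the kubelet kills the container with the pod's 600s grace period, and a fresh instance (d6aef7e3...) starts. A stand-in for the HTTP check itself; the 1s client timeout mirrors the default probe timeoutSeconds, which is an assumption here:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one HTTP liveness check the way the log shows it failing:
    // connection refused surfaces as a dial error before any status code.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil // 2xx/3xx counts as healthy
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }
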
podUID="54447db1-536b-4291-a465-393fd36a8a4b" containerName="registry-server" Dec 02 11:28:59 crc kubenswrapper[4813]: I1202 11:28:59.833121 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:28:59 crc kubenswrapper[4813]: I1202 11:28:59.847244 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdn2m"] Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.001326 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbtrh\" (UniqueName: \"kubernetes.io/projected/678d314c-7803-457a-8acc-cb557eb1e797-kube-api-access-lbtrh\") pod \"certified-operators-kdn2m\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.001431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-catalog-content\") pod \"certified-operators-kdn2m\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.001476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-utilities\") pod \"certified-operators-kdn2m\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.103731 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbtrh\" (UniqueName: \"kubernetes.io/projected/678d314c-7803-457a-8acc-cb557eb1e797-kube-api-access-lbtrh\") pod \"certified-operators-kdn2m\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.103839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-catalog-content\") pod \"certified-operators-kdn2m\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.103879 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-utilities\") pod \"certified-operators-kdn2m\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.104408 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-catalog-content\") pod \"certified-operators-kdn2m\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.104437 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-utilities\") pod \"certified-operators-kdn2m\" (UID: 
\"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.135717 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbtrh\" (UniqueName: \"kubernetes.io/projected/678d314c-7803-457a-8acc-cb557eb1e797-kube-api-access-lbtrh\") pod \"certified-operators-kdn2m\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.214873 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:00 crc kubenswrapper[4813]: I1202 11:29:00.760218 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdn2m"] Dec 02 11:29:01 crc kubenswrapper[4813]: I1202 11:29:01.368384 4813 generic.go:334] "Generic (PLEG): container finished" podID="678d314c-7803-457a-8acc-cb557eb1e797" containerID="e83f8b1e898d6930fdeeab10dc06cd880510c23df5f60fef789caf7d84d2816f" exitCode=0 Dec 02 11:29:01 crc kubenswrapper[4813]: I1202 11:29:01.368640 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdn2m" event={"ID":"678d314c-7803-457a-8acc-cb557eb1e797","Type":"ContainerDied","Data":"e83f8b1e898d6930fdeeab10dc06cd880510c23df5f60fef789caf7d84d2816f"} Dec 02 11:29:01 crc kubenswrapper[4813]: I1202 11:29:01.368917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdn2m" event={"ID":"678d314c-7803-457a-8acc-cb557eb1e797","Type":"ContainerStarted","Data":"7ec0b9be134245c626935ee02fa86b2935785ac59db811850bf2bb055e8cc017"} Dec 02 11:29:02 crc kubenswrapper[4813]: I1202 11:29:02.380948 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdn2m" event={"ID":"678d314c-7803-457a-8acc-cb557eb1e797","Type":"ContainerStarted","Data":"8115a3ef065f9a47e820e0dfa62fd3f73e813a1ac29ae0042788649903efa4ed"} Dec 02 11:29:03 crc kubenswrapper[4813]: I1202 11:29:03.391716 4813 generic.go:334] "Generic (PLEG): container finished" podID="678d314c-7803-457a-8acc-cb557eb1e797" containerID="8115a3ef065f9a47e820e0dfa62fd3f73e813a1ac29ae0042788649903efa4ed" exitCode=0 Dec 02 11:29:03 crc kubenswrapper[4813]: I1202 11:29:03.391824 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdn2m" event={"ID":"678d314c-7803-457a-8acc-cb557eb1e797","Type":"ContainerDied","Data":"8115a3ef065f9a47e820e0dfa62fd3f73e813a1ac29ae0042788649903efa4ed"} Dec 02 11:29:04 crc kubenswrapper[4813]: I1202 11:29:04.401116 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdn2m" event={"ID":"678d314c-7803-457a-8acc-cb557eb1e797","Type":"ContainerStarted","Data":"51ad156bd4a4d289640016edd2f7dcd5a208c3756df09bae330541d6ae8a9006"} Dec 02 11:29:04 crc kubenswrapper[4813]: I1202 11:29:04.428971 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kdn2m" podStartSLOduration=2.890383087 podStartE2EDuration="5.428953999s" podCreationTimestamp="2025-12-02 11:28:59 +0000 UTC" firstStartedPulling="2025-12-02 11:29:01.371697784 +0000 UTC m=+4865.566872086" lastFinishedPulling="2025-12-02 11:29:03.910268696 +0000 UTC m=+4868.105442998" observedRunningTime="2025-12-02 11:29:04.424875784 +0000 UTC m=+4868.620050096" 
watchObservedRunningTime="2025-12-02 11:29:04.428953999 +0000 UTC m=+4868.624128301" Dec 02 11:29:10 crc kubenswrapper[4813]: I1202 11:29:10.215273 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:10 crc kubenswrapper[4813]: I1202 11:29:10.216540 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:10 crc kubenswrapper[4813]: I1202 11:29:10.300335 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:10 crc kubenswrapper[4813]: I1202 11:29:10.513799 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:10 crc kubenswrapper[4813]: I1202 11:29:10.563767 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdn2m"] Dec 02 11:29:12 crc kubenswrapper[4813]: I1202 11:29:12.477977 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kdn2m" podUID="678d314c-7803-457a-8acc-cb557eb1e797" containerName="registry-server" containerID="cri-o://51ad156bd4a4d289640016edd2f7dcd5a208c3756df09bae330541d6ae8a9006" gracePeriod=2 Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.490842 4813 generic.go:334] "Generic (PLEG): container finished" podID="678d314c-7803-457a-8acc-cb557eb1e797" containerID="51ad156bd4a4d289640016edd2f7dcd5a208c3756df09bae330541d6ae8a9006" exitCode=0 Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.491207 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdn2m" event={"ID":"678d314c-7803-457a-8acc-cb557eb1e797","Type":"ContainerDied","Data":"51ad156bd4a4d289640016edd2f7dcd5a208c3756df09bae330541d6ae8a9006"} Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.593726 4813 util.go:48] "No ready sandbox for pod can be found. 
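
The "Generic (PLEG)" records above come from the pod lifecycle event generator: it periodically relists containers from the runtime, diffs the snapshot against the previous one, and turns state transitions into the ContainerStarted/ContainerDied events that SyncLoop consumes. A toy diff of that shape (types and names are ours, not the kubelet's):

    package main

    import "fmt"

    type state int

    const (
        running state = iota
        exited
    )

    // relist diffs the current container snapshot against the previous one and
    // emits lifecycle events for each observed transition.
    func relist(prev, curr map[string]state) {
        for id, s := range curr {
            old, seen := prev[id]
            switch {
            case !seen && s == running:
                fmt.Println("event ContainerStarted", id)
            case seen && old == running && s == exited:
                fmt.Println("event ContainerDied", id)
            }
        }
    }

    func main() {
        prev := map[string]state{"e83f8b1e": running}
        curr := map[string]state{"e83f8b1e": exited, "8115a3ef": running}
        relist(prev, curr) // order of the two events may vary (map iteration)
    }
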
Need to start a new one" pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.625922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbtrh\" (UniqueName: \"kubernetes.io/projected/678d314c-7803-457a-8acc-cb557eb1e797-kube-api-access-lbtrh\") pod \"678d314c-7803-457a-8acc-cb557eb1e797\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.626187 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-catalog-content\") pod \"678d314c-7803-457a-8acc-cb557eb1e797\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.626310 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-utilities\") pod \"678d314c-7803-457a-8acc-cb557eb1e797\" (UID: \"678d314c-7803-457a-8acc-cb557eb1e797\") " Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.627004 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-utilities" (OuterVolumeSpecName: "utilities") pod "678d314c-7803-457a-8acc-cb557eb1e797" (UID: "678d314c-7803-457a-8acc-cb557eb1e797"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.627924 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.639207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678d314c-7803-457a-8acc-cb557eb1e797-kube-api-access-lbtrh" (OuterVolumeSpecName: "kube-api-access-lbtrh") pod "678d314c-7803-457a-8acc-cb557eb1e797" (UID: "678d314c-7803-457a-8acc-cb557eb1e797"). InnerVolumeSpecName "kube-api-access-lbtrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.677650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "678d314c-7803-457a-8acc-cb557eb1e797" (UID: "678d314c-7803-457a-8acc-cb557eb1e797"). InnerVolumeSpecName "catalog-content". 
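
Both directions of volume handling in this log, the VerifyControllerAttachedVolume → MountVolume.SetUp sequences on pod creation and the UnmountVolume.TearDown sequences above, are the same reconciler comparing a desired state of world against an actual state of world. A compressed sketch of the mount side, with toy types rather than the kubelet's real structures:

    package main

    import "fmt"

    type phase int

    const (
        attached phase = iota // VerifyControllerAttachedVolume done
        mounted               // MountVolume.SetUp done
    )

    // reconcile walks the desired volume list and advances each volume one
    // phase per pass, the way the attach/mount records above march in lockstep.
    func reconcile(desired []string, actual map[string]phase) {
        for _, v := range desired {
            p, ok := actual[v]
            switch {
            case !ok:
                fmt.Printf("VerifyControllerAttachedVolume started for %q\n", v)
                actual[v] = attached
            case p == attached:
                fmt.Printf("MountVolume.SetUp succeeded for %q\n", v)
                actual[v] = mounted
            }
        }
    }

    func main() {
        actual := map[string]phase{}
        desired := []string{"catalog-content", "utilities", "kube-api-access-lbtrh"}
        reconcile(desired, actual) // pass 1: record attachment
        reconcile(desired, actual) // pass 2: mount everything
    }
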
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.729411 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d314c-7803-457a-8acc-cb557eb1e797-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:29:13 crc kubenswrapper[4813]: I1202 11:29:13.729446 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbtrh\" (UniqueName: \"kubernetes.io/projected/678d314c-7803-457a-8acc-cb557eb1e797-kube-api-access-lbtrh\") on node \"crc\" DevicePath \"\"" Dec 02 11:29:14 crc kubenswrapper[4813]: I1202 11:29:14.504455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdn2m" event={"ID":"678d314c-7803-457a-8acc-cb557eb1e797","Type":"ContainerDied","Data":"7ec0b9be134245c626935ee02fa86b2935785ac59db811850bf2bb055e8cc017"} Dec 02 11:29:14 crc kubenswrapper[4813]: I1202 11:29:14.504972 4813 scope.go:117] "RemoveContainer" containerID="51ad156bd4a4d289640016edd2f7dcd5a208c3756df09bae330541d6ae8a9006" Dec 02 11:29:14 crc kubenswrapper[4813]: I1202 11:29:14.504537 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdn2m" Dec 02 11:29:14 crc kubenswrapper[4813]: I1202 11:29:14.537553 4813 scope.go:117] "RemoveContainer" containerID="8115a3ef065f9a47e820e0dfa62fd3f73e813a1ac29ae0042788649903efa4ed" Dec 02 11:29:14 crc kubenswrapper[4813]: I1202 11:29:14.541064 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdn2m"] Dec 02 11:29:14 crc kubenswrapper[4813]: I1202 11:29:14.567242 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kdn2m"] Dec 02 11:29:14 crc kubenswrapper[4813]: I1202 11:29:14.573289 4813 scope.go:117] "RemoveContainer" containerID="e83f8b1e898d6930fdeeab10dc06cd880510c23df5f60fef789caf7d84d2816f" Dec 02 11:29:16 crc kubenswrapper[4813]: I1202 11:29:16.079083 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678d314c-7803-457a-8acc-cb557eb1e797" path="/var/lib/kubelet/pods/678d314c-7803-457a-8acc-cb557eb1e797/volumes" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.273272 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.273931 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.888404 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cm57q"] Dec 02 11:29:34 crc kubenswrapper[4813]: E1202 11:29:34.889185 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678d314c-7803-457a-8acc-cb557eb1e797" containerName="extract-content" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.889207 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="678d314c-7803-457a-8acc-cb557eb1e797" 
containerName="extract-content" Dec 02 11:29:34 crc kubenswrapper[4813]: E1202 11:29:34.889244 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678d314c-7803-457a-8acc-cb557eb1e797" containerName="extract-utilities" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.889255 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="678d314c-7803-457a-8acc-cb557eb1e797" containerName="extract-utilities" Dec 02 11:29:34 crc kubenswrapper[4813]: E1202 11:29:34.889278 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678d314c-7803-457a-8acc-cb557eb1e797" containerName="registry-server" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.889286 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="678d314c-7803-457a-8acc-cb557eb1e797" containerName="registry-server" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.889526 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="678d314c-7803-457a-8acc-cb557eb1e797" containerName="registry-server" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.899865 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cm57q"] Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.899984 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.941988 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl25d\" (UniqueName: \"kubernetes.io/projected/52f41490-6e75-4b40-839b-4535f3393068-kube-api-access-cl25d\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.942303 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-catalog-content\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:34 crc kubenswrapper[4813]: I1202 11:29:34.942339 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-utilities\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:35 crc kubenswrapper[4813]: I1202 11:29:35.043379 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-catalog-content\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:35 crc kubenswrapper[4813]: I1202 11:29:35.043687 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-utilities\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:35 crc kubenswrapper[4813]: I1202 11:29:35.043823 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cl25d\" (UniqueName: \"kubernetes.io/projected/52f41490-6e75-4b40-839b-4535f3393068-kube-api-access-cl25d\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:35 crc kubenswrapper[4813]: I1202 11:29:35.044376 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-utilities\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:35 crc kubenswrapper[4813]: I1202 11:29:35.044393 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-catalog-content\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:35 crc kubenswrapper[4813]: I1202 11:29:35.071775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl25d\" (UniqueName: \"kubernetes.io/projected/52f41490-6e75-4b40-839b-4535f3393068-kube-api-access-cl25d\") pod \"redhat-operators-cm57q\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:35 crc kubenswrapper[4813]: I1202 11:29:35.236485 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:35 crc kubenswrapper[4813]: I1202 11:29:35.757452 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cm57q"] Dec 02 11:29:36 crc kubenswrapper[4813]: I1202 11:29:36.148707 4813 generic.go:334] "Generic (PLEG): container finished" podID="52f41490-6e75-4b40-839b-4535f3393068" containerID="d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c" exitCode=0 Dec 02 11:29:36 crc kubenswrapper[4813]: I1202 11:29:36.148820 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm57q" event={"ID":"52f41490-6e75-4b40-839b-4535f3393068","Type":"ContainerDied","Data":"d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c"} Dec 02 11:29:36 crc kubenswrapper[4813]: I1202 11:29:36.149063 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm57q" event={"ID":"52f41490-6e75-4b40-839b-4535f3393068","Type":"ContainerStarted","Data":"46565694287f4a0b28b1960d27ee59ff91757c194ab79a126f24ba20af516763"} Dec 02 11:29:36 crc kubenswrapper[4813]: I1202 11:29:36.151165 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:29:37 crc kubenswrapper[4813]: I1202 11:29:37.159437 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm57q" event={"ID":"52f41490-6e75-4b40-839b-4535f3393068","Type":"ContainerStarted","Data":"321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a"} Dec 02 11:29:39 crc kubenswrapper[4813]: I1202 11:29:39.182911 4813 generic.go:334] "Generic (PLEG): container finished" podID="52f41490-6e75-4b40-839b-4535f3393068" containerID="321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a" exitCode=0 Dec 02 11:29:39 crc kubenswrapper[4813]: I1202 11:29:39.183011 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cm57q" event={"ID":"52f41490-6e75-4b40-839b-4535f3393068","Type":"ContainerDied","Data":"321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a"} Dec 02 11:29:40 crc kubenswrapper[4813]: I1202 11:29:40.202888 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm57q" event={"ID":"52f41490-6e75-4b40-839b-4535f3393068","Type":"ContainerStarted","Data":"9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256"} Dec 02 11:29:40 crc kubenswrapper[4813]: I1202 11:29:40.228031 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cm57q" podStartSLOduration=2.52914256 podStartE2EDuration="6.228011188s" podCreationTimestamp="2025-12-02 11:29:34 +0000 UTC" firstStartedPulling="2025-12-02 11:29:36.15076766 +0000 UTC m=+4900.345941982" lastFinishedPulling="2025-12-02 11:29:39.849636308 +0000 UTC m=+4904.044810610" observedRunningTime="2025-12-02 11:29:40.225201689 +0000 UTC m=+4904.420376011" watchObservedRunningTime="2025-12-02 11:29:40.228011188 +0000 UTC m=+4904.423185500" Dec 02 11:29:45 crc kubenswrapper[4813]: I1202 11:29:45.237212 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:45 crc kubenswrapper[4813]: I1202 11:29:45.237931 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:46 crc kubenswrapper[4813]: I1202 11:29:46.313977 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cm57q" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="registry-server" probeResult="failure" output=< Dec 02 11:29:46 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 11:29:46 crc kubenswrapper[4813]: > Dec 02 11:29:55 crc kubenswrapper[4813]: I1202 11:29:55.285442 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:55 crc kubenswrapper[4813]: I1202 11:29:55.363524 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:55 crc kubenswrapper[4813]: I1202 11:29:55.535717 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cm57q"] Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.365128 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cm57q" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="registry-server" containerID="cri-o://9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256" gracePeriod=2 Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.851118 4813 util.go:48] "No ready sandbox for pod can be found. 
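
Unlike the HTTP liveness checks earlier, the startup probe failure above prints "timeout: failed to connect service \":50051\" within 1s", which reads like a gRPC health probe against the registry-server port. A minimal stand-in that only tests TCP reachability within the same 1s budget; the real probe also issues a health-check RPC, and the loopback address here is a placeholder for the pod IP:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Can we reach the registry-server port within the probe's 1s budget?
        conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
        if err != nil {
            fmt.Printf("timeout: failed to connect service %q within 1s: %v\n", ":50051", err)
            return
        }
        conn.Close()
        fmt.Println("probe ok")
    }
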
Need to start a new one" pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.900260 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-utilities\") pod \"52f41490-6e75-4b40-839b-4535f3393068\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.900443 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-catalog-content\") pod \"52f41490-6e75-4b40-839b-4535f3393068\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.900494 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl25d\" (UniqueName: \"kubernetes.io/projected/52f41490-6e75-4b40-839b-4535f3393068-kube-api-access-cl25d\") pod \"52f41490-6e75-4b40-839b-4535f3393068\" (UID: \"52f41490-6e75-4b40-839b-4535f3393068\") " Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.901903 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-utilities" (OuterVolumeSpecName: "utilities") pod "52f41490-6e75-4b40-839b-4535f3393068" (UID: "52f41490-6e75-4b40-839b-4535f3393068"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.902477 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.909192 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f41490-6e75-4b40-839b-4535f3393068-kube-api-access-cl25d" (OuterVolumeSpecName: "kube-api-access-cl25d") pod "52f41490-6e75-4b40-839b-4535f3393068" (UID: "52f41490-6e75-4b40-839b-4535f3393068"). InnerVolumeSpecName "kube-api-access-cl25d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:29:56 crc kubenswrapper[4813]: I1202 11:29:56.999027 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52f41490-6e75-4b40-839b-4535f3393068" (UID: "52f41490-6e75-4b40-839b-4535f3393068"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.005399 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52f41490-6e75-4b40-839b-4535f3393068-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.005464 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl25d\" (UniqueName: \"kubernetes.io/projected/52f41490-6e75-4b40-839b-4535f3393068-kube-api-access-cl25d\") on node \"crc\" DevicePath \"\"" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.381941 4813 generic.go:334] "Generic (PLEG): container finished" podID="52f41490-6e75-4b40-839b-4535f3393068" containerID="9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256" exitCode=0 Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.381989 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm57q" event={"ID":"52f41490-6e75-4b40-839b-4535f3393068","Type":"ContainerDied","Data":"9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256"} Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.382018 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cm57q" event={"ID":"52f41490-6e75-4b40-839b-4535f3393068","Type":"ContainerDied","Data":"46565694287f4a0b28b1960d27ee59ff91757c194ab79a126f24ba20af516763"} Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.382038 4813 scope.go:117] "RemoveContainer" containerID="9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.382109 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cm57q" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.429451 4813 scope.go:117] "RemoveContainer" containerID="321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.455526 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cm57q"] Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.465114 4813 scope.go:117] "RemoveContainer" containerID="d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.466622 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cm57q"] Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.534211 4813 scope.go:117] "RemoveContainer" containerID="9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256" Dec 02 11:29:57 crc kubenswrapper[4813]: E1202 11:29:57.535137 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256\": container with ID starting with 9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256 not found: ID does not exist" containerID="9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.535246 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256"} err="failed to get container status \"9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256\": rpc error: code = NotFound desc = could not find container \"9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256\": container with ID starting with 9dc3a48821b0f8cb230a402543091fd174a5eb75e1971ba224cec4743d175256 not found: ID does not exist" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.535332 4813 scope.go:117] "RemoveContainer" containerID="321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a" Dec 02 11:29:57 crc kubenswrapper[4813]: E1202 11:29:57.535895 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a\": container with ID starting with 321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a not found: ID does not exist" containerID="321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.535972 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a"} err="failed to get container status \"321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a\": rpc error: code = NotFound desc = could not find container \"321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a\": container with ID starting with 321cfb28304b3c8591bb3afc4612667cdafbe901053d6f33346f189bf273c27a not found: ID does not exist" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.536015 4813 scope.go:117] "RemoveContainer" containerID="d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c" Dec 02 11:29:57 crc kubenswrapper[4813]: E1202 11:29:57.537041 4813 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c\": container with ID starting with d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c not found: ID does not exist" containerID="d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c" Dec 02 11:29:57 crc kubenswrapper[4813]: I1202 11:29:57.537065 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c"} err="failed to get container status \"d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c\": rpc error: code = NotFound desc = could not find container \"d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c\": container with ID starting with d2f66cf7ad64fd46d601c107fd8bd9830ca7a814b07773999519d6780e56f34c not found: ID does not exist" Dec 02 11:29:58 crc kubenswrapper[4813]: I1202 11:29:58.078663 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f41490-6e75-4b40-839b-4535f3393068" path="/var/lib/kubelet/pods/52f41490-6e75-4b40-839b-4535f3393068/volumes" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.184335 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g"] Dec 02 11:30:00 crc kubenswrapper[4813]: E1202 11:30:00.186479 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="extract-utilities" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.186522 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="extract-utilities" Dec 02 11:30:00 crc kubenswrapper[4813]: E1202 11:30:00.186582 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="extract-content" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.186595 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="extract-content" Dec 02 11:30:00 crc kubenswrapper[4813]: E1202 11:30:00.186620 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="registry-server" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.186629 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="registry-server" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.187019 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f41490-6e75-4b40-839b-4535f3393068" containerName="registry-server" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.188573 4813 util.go:30] "No sandbox for pod can be found. 
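
The cpu_manager.go / memory_manager.go pairs above are housekeeping on the new pod's admission: any CPU or memory assignment still recorded for a pod UID that no longer exists is dropped before resources are handed out again. A minimal sketch of that RemoveStaleState idea, with a toy map rather than the kubelet's real checkpointed state (the shortened UID is illustrative):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops per-container CPU assignments whose pod is no
    // longer active, echoing the RemoveStaleState / "Deleted CPUSet assignment"
    // pairs in the log. Deleting while ranging over a map is safe in Go.
    func removeStaleState(assignments map[key][]int, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key][]int{
            {podUID: "52f41490", container: "registry-server"}: {2, 3},
        }
        removeStaleState(assignments, map[string]bool{}) // the old pod is gone
    }
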
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.192211 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.192647 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.196301 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g"] Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.269923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/030fb7d8-8858-48f1-b675-be3eb0e802f2-config-volume\") pod \"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.270195 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfj7m\" (UniqueName: \"kubernetes.io/projected/030fb7d8-8858-48f1-b675-be3eb0e802f2-kube-api-access-mfj7m\") pod \"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.270319 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/030fb7d8-8858-48f1-b675-be3eb0e802f2-secret-volume\") pod \"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.371719 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfj7m\" (UniqueName: \"kubernetes.io/projected/030fb7d8-8858-48f1-b675-be3eb0e802f2-kube-api-access-mfj7m\") pod \"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.371822 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/030fb7d8-8858-48f1-b675-be3eb0e802f2-secret-volume\") pod \"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.371882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/030fb7d8-8858-48f1-b675-be3eb0e802f2-config-volume\") pod \"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.373005 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/030fb7d8-8858-48f1-b675-be3eb0e802f2-config-volume\") pod 
\"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.379955 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/030fb7d8-8858-48f1-b675-be3eb0e802f2-secret-volume\") pod \"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.388608 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfj7m\" (UniqueName: \"kubernetes.io/projected/030fb7d8-8858-48f1-b675-be3eb0e802f2-kube-api-access-mfj7m\") pod \"collect-profiles-29411250-9zd6g\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:00 crc kubenswrapper[4813]: I1202 11:30:00.529347 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:01 crc kubenswrapper[4813]: I1202 11:30:01.005305 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g"] Dec 02 11:30:01 crc kubenswrapper[4813]: I1202 11:30:01.425553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" event={"ID":"030fb7d8-8858-48f1-b675-be3eb0e802f2","Type":"ContainerStarted","Data":"ded82674dbdbf81512d772676df004b7e7ba3da6730b0bc453b05d5e08f674e3"} Dec 02 11:30:01 crc kubenswrapper[4813]: I1202 11:30:01.425863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" event={"ID":"030fb7d8-8858-48f1-b675-be3eb0e802f2","Type":"ContainerStarted","Data":"ef43e32856062f7919675f2b0e5fd84cd0d3d6a93a7e576d6915c783549e4ad6"} Dec 02 11:30:01 crc kubenswrapper[4813]: I1202 11:30:01.452360 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" podStartSLOduration=1.4523387030000001 podStartE2EDuration="1.452338703s" podCreationTimestamp="2025-12-02 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:30:01.444403398 +0000 UTC m=+4925.639577700" watchObservedRunningTime="2025-12-02 11:30:01.452338703 +0000 UTC m=+4925.647513005" Dec 02 11:30:02 crc kubenswrapper[4813]: I1202 11:30:02.440712 4813 generic.go:334] "Generic (PLEG): container finished" podID="030fb7d8-8858-48f1-b675-be3eb0e802f2" containerID="ded82674dbdbf81512d772676df004b7e7ba3da6730b0bc453b05d5e08f674e3" exitCode=0 Dec 02 11:30:02 crc kubenswrapper[4813]: I1202 11:30:02.440801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" event={"ID":"030fb7d8-8858-48f1-b675-be3eb0e802f2","Type":"ContainerDied","Data":"ded82674dbdbf81512d772676df004b7e7ba3da6730b0bc453b05d5e08f674e3"} Dec 02 11:30:03 crc kubenswrapper[4813]: I1202 11:30:03.879996 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:03 crc kubenswrapper[4813]: I1202 11:30:03.953617 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/030fb7d8-8858-48f1-b675-be3eb0e802f2-config-volume\") pod \"030fb7d8-8858-48f1-b675-be3eb0e802f2\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " Dec 02 11:30:03 crc kubenswrapper[4813]: I1202 11:30:03.953696 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/030fb7d8-8858-48f1-b675-be3eb0e802f2-secret-volume\") pod \"030fb7d8-8858-48f1-b675-be3eb0e802f2\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " Dec 02 11:30:03 crc kubenswrapper[4813]: I1202 11:30:03.954027 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfj7m\" (UniqueName: \"kubernetes.io/projected/030fb7d8-8858-48f1-b675-be3eb0e802f2-kube-api-access-mfj7m\") pod \"030fb7d8-8858-48f1-b675-be3eb0e802f2\" (UID: \"030fb7d8-8858-48f1-b675-be3eb0e802f2\") " Dec 02 11:30:03 crc kubenswrapper[4813]: I1202 11:30:03.954200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030fb7d8-8858-48f1-b675-be3eb0e802f2-config-volume" (OuterVolumeSpecName: "config-volume") pod "030fb7d8-8858-48f1-b675-be3eb0e802f2" (UID: "030fb7d8-8858-48f1-b675-be3eb0e802f2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:30:03 crc kubenswrapper[4813]: I1202 11:30:03.954575 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/030fb7d8-8858-48f1-b675-be3eb0e802f2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:30:03 crc kubenswrapper[4813]: I1202 11:30:03.962378 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030fb7d8-8858-48f1-b675-be3eb0e802f2-kube-api-access-mfj7m" (OuterVolumeSpecName: "kube-api-access-mfj7m") pod "030fb7d8-8858-48f1-b675-be3eb0e802f2" (UID: "030fb7d8-8858-48f1-b675-be3eb0e802f2"). InnerVolumeSpecName "kube-api-access-mfj7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:30:03 crc kubenswrapper[4813]: I1202 11:30:03.963096 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/030fb7d8-8858-48f1-b675-be3eb0e802f2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "030fb7d8-8858-48f1-b675-be3eb0e802f2" (UID: "030fb7d8-8858-48f1-b675-be3eb0e802f2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.056142 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfj7m\" (UniqueName: \"kubernetes.io/projected/030fb7d8-8858-48f1-b675-be3eb0e802f2-kube-api-access-mfj7m\") on node \"crc\" DevicePath \"\"" Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.056196 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/030fb7d8-8858-48f1-b675-be3eb0e802f2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.274470 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.274560 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.462348 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" event={"ID":"030fb7d8-8858-48f1-b675-be3eb0e802f2","Type":"ContainerDied","Data":"ef43e32856062f7919675f2b0e5fd84cd0d3d6a93a7e576d6915c783549e4ad6"} Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.462727 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef43e32856062f7919675f2b0e5fd84cd0d3d6a93a7e576d6915c783549e4ad6" Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.462418 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411250-9zd6g" Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.540789 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6"] Dec 02 11:30:04 crc kubenswrapper[4813]: I1202 11:30:04.549455 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411205-9kpq6"] Dec 02 11:30:06 crc kubenswrapper[4813]: I1202 11:30:06.085951 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7473fd60-f8fe-4e62-bc39-1147d6a57f71" path="/var/lib/kubelet/pods/7473fd60-f8fe-4e62-bc39-1147d6a57f71/volumes" Dec 02 11:30:07 crc kubenswrapper[4813]: I1202 11:30:07.196360 4813 scope.go:117] "RemoveContainer" containerID="9b7bd818ccd689be9838893be514e98b4b8a1c0acc2fa16d4b6cd0505744adee" Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.273866 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.274515 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.274565 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.275495 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.275569 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" gracePeriod=600 Dec 02 11:30:34 crc kubenswrapper[4813]: E1202 11:30:34.395238 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.752933 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" exitCode=0 Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.752976 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"} Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.753010 4813 scope.go:117] "RemoveContainer" containerID="fc66c69d2ed116d9c2c499dd1870a147dfbb225a57f72561832da8893726569c" Dec 02 11:30:34 crc kubenswrapper[4813]: I1202 11:30:34.754181 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:30:34 crc kubenswrapper[4813]: E1202 11:30:34.754894 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:30:46 crc kubenswrapper[4813]: I1202 11:30:46.075491 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:30:46 crc kubenswrapper[4813]: E1202 11:30:46.076527 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:31:00 crc kubenswrapper[4813]: I1202 11:31:00.069038 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:31:00 crc kubenswrapper[4813]: E1202 11:31:00.070066 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:31:13 crc kubenswrapper[4813]: I1202 11:31:13.068552 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:31:13 crc kubenswrapper[4813]: E1202 11:31:13.069206 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:31:25 crc kubenswrapper[4813]: I1202 11:31:25.069204 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:31:25 crc kubenswrapper[4813]: E1202 11:31:25.070277 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 02 11:31:25 crc kubenswrapper[4813]: E1202 11:31:25.070277 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:31:38 crc kubenswrapper[4813]: I1202 11:31:38.068185 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:31:38 crc kubenswrapper[4813]: E1202 11:31:38.068947 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:31:52 crc kubenswrapper[4813]: I1202 11:31:52.069126 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:31:52 crc kubenswrapper[4813]: E1202 11:31:52.070461 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:32:04 crc kubenswrapper[4813]: I1202 11:32:04.068388 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:32:04 crc kubenswrapper[4813]: E1202 11:32:04.069603 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:32:19 crc kubenswrapper[4813]: I1202 11:32:19.068355 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:32:19 crc kubenswrapper[4813]: E1202 11:32:19.069233 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:32:31 crc kubenswrapper[4813]: I1202 11:32:31.067419 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:32:31 crc kubenswrapper[4813]: E1202 11:32:31.068556 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:32:42 crc kubenswrapper[4813]: I1202 11:32:42.070655 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:32:42 crc kubenswrapper[4813]: E1202 11:32:42.072023 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:32:53 crc kubenswrapper[4813]: I1202 11:32:53.069654 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:32:53 crc kubenswrapper[4813]: E1202 11:32:53.070219 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:33:06 crc kubenswrapper[4813]: I1202 11:33:06.073572 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:33:06 crc kubenswrapper[4813]: E1202 11:33:06.075151 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:33:17 crc kubenswrapper[4813]: I1202 11:33:17.068531 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:33:17 crc kubenswrapper[4813]: E1202 11:33:17.069226 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.715569 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2v58"]
Dec 02 11:33:27 crc kubenswrapper[4813]: E1202 11:33:27.716510 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030fb7d8-8858-48f1-b675-be3eb0e802f2" containerName="collect-profiles"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.716526 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="030fb7d8-8858-48f1-b675-be3eb0e802f2" containerName="collect-profiles"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.716764 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="030fb7d8-8858-48f1-b675-be3eb0e802f2" containerName="collect-profiles"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.718484 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.727555 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2v58"]
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.871974 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-utilities\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.872035 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-catalog-content\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.872059 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qrxp\" (UniqueName: \"kubernetes.io/projected/81b10d8f-fdb7-4b87-b450-d409ae6862c3-kube-api-access-8qrxp\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.974580 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-utilities\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.974651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-catalog-content\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.974678 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qrxp\" (UniqueName: \"kubernetes.io/projected/81b10d8f-fdb7-4b87-b450-d409ae6862c3-kube-api-access-8qrxp\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.975120 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-utilities\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:27 crc kubenswrapper[4813]: I1202 11:33:27.975249 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-catalog-content\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:28 crc kubenswrapper[4813]: I1202 11:33:28.003738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qrxp\" (UniqueName: \"kubernetes.io/projected/81b10d8f-fdb7-4b87-b450-d409ae6862c3-kube-api-access-8qrxp\") pod \"community-operators-w2v58\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") " pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:28 crc kubenswrapper[4813]: I1202 11:33:28.047929 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:28 crc kubenswrapper[4813]: I1202 11:33:28.624966 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2v58"]
Dec 02 11:33:29 crc kubenswrapper[4813]: I1202 11:33:29.471864 4813 generic.go:334] "Generic (PLEG): container finished" podID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerID="be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb" exitCode=0
Dec 02 11:33:29 crc kubenswrapper[4813]: I1202 11:33:29.471932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2v58" event={"ID":"81b10d8f-fdb7-4b87-b450-d409ae6862c3","Type":"ContainerDied","Data":"be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb"}
Dec 02 11:33:29 crc kubenswrapper[4813]: I1202 11:33:29.472378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2v58" event={"ID":"81b10d8f-fdb7-4b87-b450-d409ae6862c3","Type":"ContainerStarted","Data":"bc5eddfd2a57d34bc3df0554080de740e04050693c8a733fb6cd4d6eed5ae2b2"}
Dec 02 11:33:30 crc kubenswrapper[4813]: I1202 11:33:30.486929 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2v58" event={"ID":"81b10d8f-fdb7-4b87-b450-d409ae6862c3","Type":"ContainerStarted","Data":"15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21"}
Dec 02 11:33:31 crc kubenswrapper[4813]: I1202 11:33:31.500940 4813 generic.go:334] "Generic (PLEG): container finished" podID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerID="15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21" exitCode=0
Dec 02 11:33:31 crc kubenswrapper[4813]: I1202 11:33:31.501288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2v58" event={"ID":"81b10d8f-fdb7-4b87-b450-d409ae6862c3","Type":"ContainerDied","Data":"15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21"}
Dec 02 11:33:32 crc kubenswrapper[4813]: I1202 11:33:32.068648 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:33:32 crc kubenswrapper[4813]: E1202 11:33:32.069459 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:33:32 crc kubenswrapper[4813]: I1202 11:33:32.510896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2v58" event={"ID":"81b10d8f-fdb7-4b87-b450-d409ae6862c3","Type":"ContainerStarted","Data":"82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516"}
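[Editor's note] community-operators-w2v58 follows the usual marketplace catalog pattern: extract-utilities (be10c3…) and extract-content (15600b…) each run to completion before registry-server (82f9b4…) starts, consistent with the first two being init containers. A sketch that mirrors only that ordering; the container names come from the log, everything else is invented for illustration:

    package main

    import "fmt"

    type result struct {
        name     string
        exitCode int
    }

    // runToCompletion stands in for starting an init container and waiting
    // for it to exit; here it always succeeds.
    func runToCompletion(name string) result { return result{name, 0} }

    func main() {
        for _, init := range []string{"extract-utilities", "extract-content"} {
            r := runToCompletion(init)
            fmt.Printf("ContainerDied %s exitCode=%d\n", r.name, r.exitCode)
            if r.exitCode != 0 {
                fmt.Println("init failed; pod cannot proceed")
                return
            }
        }
        // Only after every init container exits 0 does the main container start.
        fmt.Println("ContainerStarted registry-server")
    }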
Dec 02 11:33:32 crc kubenswrapper[4813]: I1202 11:33:32.537339 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2v58" podStartSLOduration=2.92391617 podStartE2EDuration="5.537322395s" podCreationTimestamp="2025-12-02 11:33:27 +0000 UTC" firstStartedPulling="2025-12-02 11:33:29.474522603 +0000 UTC m=+5133.669696905" lastFinishedPulling="2025-12-02 11:33:32.087928828 +0000 UTC m=+5136.283103130" observedRunningTime="2025-12-02 11:33:32.528833324 +0000 UTC m=+5136.724007646" watchObservedRunningTime="2025-12-02 11:33:32.537322395 +0000 UTC m=+5136.732496697"
Dec 02 11:33:38 crc kubenswrapper[4813]: I1202 11:33:38.048497 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:38 crc kubenswrapper[4813]: I1202 11:33:38.050306 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:38 crc kubenswrapper[4813]: I1202 11:33:38.097233 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:38 crc kubenswrapper[4813]: I1202 11:33:38.660930 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:38 crc kubenswrapper[4813]: I1202 11:33:38.729533 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2v58"]
Dec 02 11:33:40 crc kubenswrapper[4813]: I1202 11:33:40.594156 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2v58" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerName="registry-server" containerID="cri-o://82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516" gracePeriod=2
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.411090 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.537743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-catalog-content\") pod \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") "
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.537881 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-utilities\") pod \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") "
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.538063 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qrxp\" (UniqueName: \"kubernetes.io/projected/81b10d8f-fdb7-4b87-b450-d409ae6862c3-kube-api-access-8qrxp\") pod \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\" (UID: \"81b10d8f-fdb7-4b87-b450-d409ae6862c3\") "
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.539735 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-utilities" (OuterVolumeSpecName: "utilities") pod "81b10d8f-fdb7-4b87-b450-d409ae6862c3" (UID: "81b10d8f-fdb7-4b87-b450-d409ae6862c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.547212 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b10d8f-fdb7-4b87-b450-d409ae6862c3-kube-api-access-8qrxp" (OuterVolumeSpecName: "kube-api-access-8qrxp") pod "81b10d8f-fdb7-4b87-b450-d409ae6862c3" (UID: "81b10d8f-fdb7-4b87-b450-d409ae6862c3"). InnerVolumeSpecName "kube-api-access-8qrxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.603088 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81b10d8f-fdb7-4b87-b450-d409ae6862c3" (UID: "81b10d8f-fdb7-4b87-b450-d409ae6862c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.610916 4813 generic.go:334] "Generic (PLEG): container finished" podID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerID="82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516" exitCode=0
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.610986 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2v58" event={"ID":"81b10d8f-fdb7-4b87-b450-d409ae6862c3","Type":"ContainerDied","Data":"82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516"}
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.611051 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2v58"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.611065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2v58" event={"ID":"81b10d8f-fdb7-4b87-b450-d409ae6862c3","Type":"ContainerDied","Data":"bc5eddfd2a57d34bc3df0554080de740e04050693c8a733fb6cd4d6eed5ae2b2"}
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.611097 4813 scope.go:117] "RemoveContainer" containerID="82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.639908 4813 scope.go:117] "RemoveContainer" containerID="15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.645547 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.645601 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qrxp\" (UniqueName: \"kubernetes.io/projected/81b10d8f-fdb7-4b87-b450-d409ae6862c3-kube-api-access-8qrxp\") on node \"crc\" DevicePath \"\""
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.645615 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b10d8f-fdb7-4b87-b450-d409ae6862c3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.647906 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2v58"]
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.661124 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2v58"]
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.667124 4813 scope.go:117] "RemoveContainer" containerID="be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.701795 4813 scope.go:117] "RemoveContainer" containerID="82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516"
Dec 02 11:33:41 crc kubenswrapper[4813]: E1202 11:33:41.702351 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516\": container with ID starting with 82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516 not found: ID does not exist" containerID="82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.702425 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516"} err="failed to get container status \"82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516\": rpc error: code = NotFound desc = could not find container \"82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516\": container with ID starting with 82f9b4e507c961ce3a6a645399f8504f7997ba1a94a0ab38a381fd382a16e516 not found: ID does not exist"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.702489 4813 scope.go:117] "RemoveContainer" containerID="15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21"
Dec 02 11:33:41 crc kubenswrapper[4813]: E1202 11:33:41.702987 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21\": container with ID starting with 15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21 not found: ID does not exist" containerID="15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.703037 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21"} err="failed to get container status \"15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21\": rpc error: code = NotFound desc = could not find container \"15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21\": container with ID starting with 15600b5727d5a2c288dc2241bab7481227614159b8f8bd48af0c94e52ceecb21 not found: ID does not exist"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.703081 4813 scope.go:117] "RemoveContainer" containerID="be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb"
Dec 02 11:33:41 crc kubenswrapper[4813]: E1202 11:33:41.703470 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb\": container with ID starting with be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb not found: ID does not exist" containerID="be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb"
Dec 02 11:33:41 crc kubenswrapper[4813]: I1202 11:33:41.703509 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb"} err="failed to get container status \"be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb\": rpc error: code = NotFound desc = could not find container \"be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb\": container with ID starting with be10c39868dd3ffbf40d13c362c7cc93eaf222cf9e18d91a90fbe9f4632640bb not found: ID does not exist"
Dec 02 11:33:42 crc kubenswrapper[4813]: I1202 11:33:42.088287 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" path="/var/lib/kubelet/pods/81b10d8f-fdb7-4b87-b450-d409ae6862c3/volumes"
Dec 02 11:33:44 crc kubenswrapper[4813]: I1202 11:33:44.067810 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:33:44 crc kubenswrapper[4813]: E1202 11:33:44.068519 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:33:56 crc kubenswrapper[4813]: I1202 11:33:56.074505 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:33:56 crc kubenswrapper[4813]: E1202 11:33:56.075395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:34:11 crc kubenswrapper[4813]: I1202 11:34:11.068280 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:34:11 crc kubenswrapper[4813]: E1202 11:34:11.069043 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
Dec 02 11:34:23 crc kubenswrapper[4813]: I1202 11:34:23.068616 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:34:23 crc kubenswrapper[4813]: E1202 11:34:23.069175 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080"
containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:34:38 crc kubenswrapper[4813]: E1202 11:34:38.069340 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:34:52 crc kubenswrapper[4813]: I1202 11:34:52.068474 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:34:52 crc kubenswrapper[4813]: E1202 11:34:52.069651 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.861587 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5cdr5"] Dec 02 11:34:53 crc kubenswrapper[4813]: E1202 11:34:53.865173 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerName="registry-server" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.865665 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerName="registry-server" Dec 02 11:34:53 crc kubenswrapper[4813]: E1202 11:34:53.865849 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerName="extract-content" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.865974 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerName="extract-content" Dec 02 11:34:53 crc kubenswrapper[4813]: E1202 11:34:53.866139 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerName="extract-utilities" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.866257 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerName="extract-utilities" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.866681 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b10d8f-fdb7-4b87-b450-d409ae6862c3" containerName="registry-server" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.869003 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.886870 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cdr5"] Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.932640 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrc6t\" (UniqueName: \"kubernetes.io/projected/92f28c23-0b78-4573-9fa6-f499a3090f51-kube-api-access-rrc6t\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.932703 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-utilities\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:53 crc kubenswrapper[4813]: I1202 11:34:53.932725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-catalog-content\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:54 crc kubenswrapper[4813]: I1202 11:34:54.035320 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrc6t\" (UniqueName: \"kubernetes.io/projected/92f28c23-0b78-4573-9fa6-f499a3090f51-kube-api-access-rrc6t\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:54 crc kubenswrapper[4813]: I1202 11:34:54.035392 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-utilities\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:54 crc kubenswrapper[4813]: I1202 11:34:54.035417 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-catalog-content\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:54 crc kubenswrapper[4813]: I1202 11:34:54.035990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-catalog-content\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:54 crc kubenswrapper[4813]: I1202 11:34:54.035999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-utilities\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:54 crc kubenswrapper[4813]: I1202 11:34:54.296903 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rrc6t\" (UniqueName: \"kubernetes.io/projected/92f28c23-0b78-4573-9fa6-f499a3090f51-kube-api-access-rrc6t\") pod \"redhat-marketplace-5cdr5\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:54 crc kubenswrapper[4813]: I1202 11:34:54.490637 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:34:54 crc kubenswrapper[4813]: I1202 11:34:54.937497 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cdr5"] Dec 02 11:34:55 crc kubenswrapper[4813]: I1202 11:34:55.314205 4813 generic.go:334] "Generic (PLEG): container finished" podID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerID="2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a" exitCode=0 Dec 02 11:34:55 crc kubenswrapper[4813]: I1202 11:34:55.314333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cdr5" event={"ID":"92f28c23-0b78-4573-9fa6-f499a3090f51","Type":"ContainerDied","Data":"2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a"} Dec 02 11:34:55 crc kubenswrapper[4813]: I1202 11:34:55.314887 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cdr5" event={"ID":"92f28c23-0b78-4573-9fa6-f499a3090f51","Type":"ContainerStarted","Data":"cf5fb7d1998fd6337541757c75724c27b5c8f0a82c55d1e65429af941f9850df"} Dec 02 11:34:55 crc kubenswrapper[4813]: I1202 11:34:55.316197 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:34:57 crc kubenswrapper[4813]: I1202 11:34:57.333344 4813 generic.go:334] "Generic (PLEG): container finished" podID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerID="844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b" exitCode=0 Dec 02 11:34:57 crc kubenswrapper[4813]: I1202 11:34:57.333429 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cdr5" event={"ID":"92f28c23-0b78-4573-9fa6-f499a3090f51","Type":"ContainerDied","Data":"844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b"} Dec 02 11:34:58 crc kubenswrapper[4813]: I1202 11:34:58.343722 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cdr5" event={"ID":"92f28c23-0b78-4573-9fa6-f499a3090f51","Type":"ContainerStarted","Data":"154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f"} Dec 02 11:34:58 crc kubenswrapper[4813]: I1202 11:34:58.366441 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5cdr5" podStartSLOduration=2.775415498 podStartE2EDuration="5.366424307s" podCreationTimestamp="2025-12-02 11:34:53 +0000 UTC" firstStartedPulling="2025-12-02 11:34:55.315917114 +0000 UTC m=+5219.511091416" lastFinishedPulling="2025-12-02 11:34:57.906925923 +0000 UTC m=+5222.102100225" observedRunningTime="2025-12-02 11:34:58.362424333 +0000 UTC m=+5222.557598635" watchObservedRunningTime="2025-12-02 11:34:58.366424307 +0000 UTC m=+5222.561598609" Dec 02 11:35:04 crc kubenswrapper[4813]: I1202 11:35:04.491010 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:35:04 crc kubenswrapper[4813]: I1202 11:35:04.491800 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:35:04 crc kubenswrapper[4813]: I1202 11:35:04.551827 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:35:05 crc kubenswrapper[4813]: I1202 11:35:05.067986 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:35:05 crc kubenswrapper[4813]: E1202 11:35:05.068330 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:35:05 crc kubenswrapper[4813]: I1202 11:35:05.737902 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:35:05 crc kubenswrapper[4813]: I1202 11:35:05.787575 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cdr5"] Dec 02 11:35:07 crc kubenswrapper[4813]: I1202 11:35:07.442955 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5cdr5" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerName="registry-server" containerID="cri-o://154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f" gracePeriod=2 Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.188128 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.315533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrc6t\" (UniqueName: \"kubernetes.io/projected/92f28c23-0b78-4573-9fa6-f499a3090f51-kube-api-access-rrc6t\") pod \"92f28c23-0b78-4573-9fa6-f499a3090f51\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.315576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-catalog-content\") pod \"92f28c23-0b78-4573-9fa6-f499a3090f51\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.315622 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-utilities\") pod \"92f28c23-0b78-4573-9fa6-f499a3090f51\" (UID: \"92f28c23-0b78-4573-9fa6-f499a3090f51\") " Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.316768 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-utilities" (OuterVolumeSpecName: "utilities") pod "92f28c23-0b78-4573-9fa6-f499a3090f51" (UID: "92f28c23-0b78-4573-9fa6-f499a3090f51"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.321118 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f28c23-0b78-4573-9fa6-f499a3090f51-kube-api-access-rrc6t" (OuterVolumeSpecName: "kube-api-access-rrc6t") pod "92f28c23-0b78-4573-9fa6-f499a3090f51" (UID: "92f28c23-0b78-4573-9fa6-f499a3090f51"). InnerVolumeSpecName "kube-api-access-rrc6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.337620 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92f28c23-0b78-4573-9fa6-f499a3090f51" (UID: "92f28c23-0b78-4573-9fa6-f499a3090f51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.417548 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrc6t\" (UniqueName: \"kubernetes.io/projected/92f28c23-0b78-4573-9fa6-f499a3090f51-kube-api-access-rrc6t\") on node \"crc\" DevicePath \"\"" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.417577 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.417586 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f28c23-0b78-4573-9fa6-f499a3090f51-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.453235 4813 generic.go:334] "Generic (PLEG): container finished" podID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerID="154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f" exitCode=0 Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.453285 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cdr5" event={"ID":"92f28c23-0b78-4573-9fa6-f499a3090f51","Type":"ContainerDied","Data":"154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f"} Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.453310 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cdr5" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.453331 4813 scope.go:117] "RemoveContainer" containerID="154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.453318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cdr5" event={"ID":"92f28c23-0b78-4573-9fa6-f499a3090f51","Type":"ContainerDied","Data":"cf5fb7d1998fd6337541757c75724c27b5c8f0a82c55d1e65429af941f9850df"} Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.476350 4813 scope.go:117] "RemoveContainer" containerID="844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.490606 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cdr5"] Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.501040 4813 scope.go:117] "RemoveContainer" containerID="2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.507049 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cdr5"] Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.552928 4813 scope.go:117] "RemoveContainer" containerID="154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f" Dec 02 11:35:08 crc kubenswrapper[4813]: E1202 11:35:08.554023 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f\": container with ID starting with 154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f not found: ID does not exist" containerID="154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.554169 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f"} err="failed to get container status \"154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f\": rpc error: code = NotFound desc = could not find container \"154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f\": container with ID starting with 154e5fece678750515dc47c5c15a881c4eb2b330e3b6897bfb203d42683a0c9f not found: ID does not exist" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.554266 4813 scope.go:117] "RemoveContainer" containerID="844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b" Dec 02 11:35:08 crc kubenswrapper[4813]: E1202 11:35:08.554729 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b\": container with ID starting with 844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b not found: ID does not exist" containerID="844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.554809 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b"} err="failed to get container status \"844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b\": rpc error: code = NotFound desc = could not find 
container \"844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b\": container with ID starting with 844c4a62d43c4364db2470dd72550ae3593c794f07adccb4a79729836f0a860b not found: ID does not exist" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.554877 4813 scope.go:117] "RemoveContainer" containerID="2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a" Dec 02 11:35:08 crc kubenswrapper[4813]: E1202 11:35:08.555380 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a\": container with ID starting with 2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a not found: ID does not exist" containerID="2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a" Dec 02 11:35:08 crc kubenswrapper[4813]: I1202 11:35:08.555413 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a"} err="failed to get container status \"2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a\": rpc error: code = NotFound desc = could not find container \"2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a\": container with ID starting with 2c5bec8f279af53f00987a08c63a902c263775048233e9a1348e8d2025197f2a not found: ID does not exist" Dec 02 11:35:10 crc kubenswrapper[4813]: I1202 11:35:10.086622 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" path="/var/lib/kubelet/pods/92f28c23-0b78-4573-9fa6-f499a3090f51/volumes" Dec 02 11:35:20 crc kubenswrapper[4813]: I1202 11:35:20.069451 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:35:20 crc kubenswrapper[4813]: E1202 11:35:20.070305 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:35:32 crc kubenswrapper[4813]: I1202 11:35:32.067695 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:35:32 crc kubenswrapper[4813]: E1202 11:35:32.068642 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:35:43 crc kubenswrapper[4813]: I1202 11:35:43.067866 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1" Dec 02 11:35:43 crc kubenswrapper[4813]: I1202 11:35:43.856782 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"af6188dd090f1d6022234e4677a02bf82d17c87079eb894b786ec7ad5ec942d9"} 
Dec 02 11:37:34 crc kubenswrapper[4813]: I1202 11:37:34.995128 4813 generic.go:334] "Generic (PLEG): container finished" podID="456994e2-7687-4a9b-be60-d172f26b11e4" containerID="8ef471e31c2042c1c724e49a6f776c768785c44154510e0d18148889195665b5" exitCode=1
Dec 02 11:37:34 crc kubenswrapper[4813]: I1202 11:37:34.995211 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"456994e2-7687-4a9b-be60-d172f26b11e4","Type":"ContainerDied","Data":"8ef471e31c2042c1c724e49a6f776c768785c44154510e0d18148889195665b5"}
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.432119 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.511957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5jlr\" (UniqueName: \"kubernetes.io/projected/456994e2-7687-4a9b-be60-d172f26b11e4-kube-api-access-t5jlr\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.512035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-config-data\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.512087 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.512178 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config-secret\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.512220 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ca-certs\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.512288 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-temporary\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.512317 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ssh-key\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.512831 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.513043 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-workdir\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.513103 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config\") pod \"456994e2-7687-4a9b-be60-d172f26b11e4\" (UID: \"456994e2-7687-4a9b-be60-d172f26b11e4\") "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.513107 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-config-data" (OuterVolumeSpecName: "config-data") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.513678 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.513695 4813 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.519917 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.520389 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.521413 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456994e2-7687-4a9b-be60-d172f26b11e4-kube-api-access-t5jlr" (OuterVolumeSpecName: "kube-api-access-t5jlr") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "kube-api-access-t5jlr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.553279 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.563536 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.579490 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.582800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "456994e2-7687-4a9b-be60-d172f26b11e4" (UID: "456994e2-7687-4a9b-be60-d172f26b11e4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.614938 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.614974 4813 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ca-certs\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.614983 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/456994e2-7687-4a9b-be60-d172f26b11e4-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.614992 4813 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/456994e2-7687-4a9b-be60-d172f26b11e4-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.615005 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/456994e2-7687-4a9b-be60-d172f26b11e4-openstack-config\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.615014 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5jlr\" (UniqueName: \"kubernetes.io/projected/456994e2-7687-4a9b-be60-d172f26b11e4-kube-api-access-t5jlr\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.615046 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.642738 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Dec 02 11:37:36 crc kubenswrapper[4813]: I1202 11:37:36.717205 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Dec 02 11:37:37 crc kubenswrapper[4813]: I1202 11:37:37.025373 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"456994e2-7687-4a9b-be60-d172f26b11e4","Type":"ContainerDied","Data":"4b073681918dc493161c9ff759812e427ec4723a9c046a374fd74771dcfb515e"}
Dec 02 11:37:37 crc kubenswrapper[4813]: I1202 11:37:37.025410 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b073681918dc493161c9ff759812e427ec4723a9c046a374fd74771dcfb515e"
Dec 02 11:37:37 crc kubenswrapper[4813]: I1202 11:37:37.025508 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.267492 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 02 11:37:39 crc kubenswrapper[4813]: E1202 11:37:39.268014 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerName="registry-server"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.268030 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerName="registry-server"
Dec 02 11:37:39 crc kubenswrapper[4813]: E1202 11:37:39.268059 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456994e2-7687-4a9b-be60-d172f26b11e4" containerName="tempest-tests-tempest-tests-runner"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.268098 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="456994e2-7687-4a9b-be60-d172f26b11e4" containerName="tempest-tests-tempest-tests-runner"
Dec 02 11:37:39 crc kubenswrapper[4813]: E1202 11:37:39.268136 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerName="extract-utilities"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.268149 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerName="extract-utilities"
Dec 02 11:37:39 crc kubenswrapper[4813]: E1202 11:37:39.268165 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerName="extract-content"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.268174 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerName="extract-content"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.268442 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="456994e2-7687-4a9b-be60-d172f26b11e4" containerName="tempest-tests-tempest-tests-runner"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.268473 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f28c23-0b78-4573-9fa6-f499a3090f51" containerName="registry-server"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.269336 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.275636 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rrxgk"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.287014 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.375920 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcnkp\" (UniqueName: \"kubernetes.io/projected/6ac6c8ab-4812-4eae-8168-83aa72675800-kube-api-access-fcnkp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6ac6c8ab-4812-4eae-8168-83aa72675800\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.376016 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6ac6c8ab-4812-4eae-8168-83aa72675800\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.478231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcnkp\" (UniqueName: \"kubernetes.io/projected/6ac6c8ab-4812-4eae-8168-83aa72675800-kube-api-access-fcnkp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6ac6c8ab-4812-4eae-8168-83aa72675800\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.478304 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6ac6c8ab-4812-4eae-8168-83aa72675800\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.478758 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6ac6c8ab-4812-4eae-8168-83aa72675800\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.506576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcnkp\" (UniqueName: \"kubernetes.io/projected/6ac6c8ab-4812-4eae-8168-83aa72675800-kube-api-access-fcnkp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6ac6c8ab-4812-4eae-8168-83aa72675800\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.516051 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6ac6c8ab-4812-4eae-8168-83aa72675800\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:39 crc kubenswrapper[4813]: I1202 11:37:39.607480 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 02 11:37:40 crc kubenswrapper[4813]: I1202 11:37:40.129418 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 02 11:37:41 crc kubenswrapper[4813]: I1202 11:37:41.075485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6ac6c8ab-4812-4eae-8168-83aa72675800","Type":"ContainerStarted","Data":"620dd1a1b892f21fd30c7b42db9a02ecfdb31d2b65f4d4298256707de3bcc2a0"}
Dec 02 11:37:42 crc kubenswrapper[4813]: I1202 11:37:42.086214 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6ac6c8ab-4812-4eae-8168-83aa72675800","Type":"ContainerStarted","Data":"3726042e11d851b8a705683890ce1c5ca9e1eb8973f495a761b4e0767fe105c1"}
Dec 02 11:37:42 crc kubenswrapper[4813]: I1202 11:37:42.112304 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.086081926 podStartE2EDuration="3.112287647s" podCreationTimestamp="2025-12-02 11:37:39 +0000 UTC" firstStartedPulling="2025-12-02 11:37:40.128122881 +0000 UTC m=+5384.323297193" lastFinishedPulling="2025-12-02 11:37:41.154328572 +0000 UTC m=+5385.349502914" observedRunningTime="2025-12-02 11:37:42.103852518 +0000 UTC m=+5386.299026850" watchObservedRunningTime="2025-12-02 11:37:42.112287647 +0000 UTC m=+5386.307461949"
Dec 02 11:38:04 crc kubenswrapper[4813]: I1202 11:38:04.274043 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 11:38:04 crc kubenswrapper[4813]: I1202 11:38:04.274925 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.509817 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p75mk/must-gather-rzmzn"]
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.512731 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/must-gather-rzmzn"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.515777 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p75mk"/"openshift-service-ca.crt"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.515856 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-p75mk"/"default-dockercfg-x4cx6"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.515919 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p75mk"/"kube-root-ca.crt"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.522958 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p75mk/must-gather-rzmzn"]
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.648702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-must-gather-output\") pod \"must-gather-rzmzn\" (UID: \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\") " pod="openshift-must-gather-p75mk/must-gather-rzmzn"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.648767 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cgtq\" (UniqueName: \"kubernetes.io/projected/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-kube-api-access-5cgtq\") pod \"must-gather-rzmzn\" (UID: \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\") " pod="openshift-must-gather-p75mk/must-gather-rzmzn"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.751040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-must-gather-output\") pod \"must-gather-rzmzn\" (UID: \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\") " pod="openshift-must-gather-p75mk/must-gather-rzmzn"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.751191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cgtq\" (UniqueName: \"kubernetes.io/projected/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-kube-api-access-5cgtq\") pod \"must-gather-rzmzn\" (UID: \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\") " pod="openshift-must-gather-p75mk/must-gather-rzmzn"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.751522 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-must-gather-output\") pod \"must-gather-rzmzn\" (UID: \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\") " pod="openshift-must-gather-p75mk/must-gather-rzmzn"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.769801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cgtq\" (UniqueName: \"kubernetes.io/projected/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-kube-api-access-5cgtq\") pod \"must-gather-rzmzn\" (UID: \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\") " pod="openshift-must-gather-p75mk/must-gather-rzmzn"
Dec 02 11:38:11 crc kubenswrapper[4813]: I1202 11:38:11.838952 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/must-gather-rzmzn"
Dec 02 11:38:12 crc kubenswrapper[4813]: I1202 11:38:12.350299 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p75mk/must-gather-rzmzn"]
Dec 02 11:38:12 crc kubenswrapper[4813]: I1202 11:38:12.412411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/must-gather-rzmzn" event={"ID":"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7","Type":"ContainerStarted","Data":"7f9eadc21be8aaa371a8241787e1eeba1af9e0806753c133022334bf4a6af9dc"}
Dec 02 11:38:18 crc kubenswrapper[4813]: I1202 11:38:18.492938 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/must-gather-rzmzn" event={"ID":"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7","Type":"ContainerStarted","Data":"feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711"}
Dec 02 11:38:18 crc kubenswrapper[4813]: I1202 11:38:18.493955 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/must-gather-rzmzn" event={"ID":"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7","Type":"ContainerStarted","Data":"a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8"}
Dec 02 11:38:18 crc kubenswrapper[4813]: I1202 11:38:18.539731 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p75mk/must-gather-rzmzn" podStartSLOduration=2.7126159530000002 podStartE2EDuration="7.539713176s" podCreationTimestamp="2025-12-02 11:38:11 +0000 UTC" firstStartedPulling="2025-12-02 11:38:12.370251173 +0000 UTC m=+5416.565425485" lastFinishedPulling="2025-12-02 11:38:17.197348376 +0000 UTC m=+5421.392522708" observedRunningTime="2025-12-02 11:38:18.526744048 +0000 UTC m=+5422.721918390" watchObservedRunningTime="2025-12-02 11:38:18.539713176 +0000 UTC m=+5422.734887488"
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.148578 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p75mk/crc-debug-kwjrl"]
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.150524 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.293764 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b99a5e21-5140-4cd1-8229-0c265cf1620b-host\") pod \"crc-debug-kwjrl\" (UID: \"b99a5e21-5140-4cd1-8229-0c265cf1620b\") " pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.293929 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5wsz\" (UniqueName: \"kubernetes.io/projected/b99a5e21-5140-4cd1-8229-0c265cf1620b-kube-api-access-v5wsz\") pod \"crc-debug-kwjrl\" (UID: \"b99a5e21-5140-4cd1-8229-0c265cf1620b\") " pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.395819 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b99a5e21-5140-4cd1-8229-0c265cf1620b-host\") pod \"crc-debug-kwjrl\" (UID: \"b99a5e21-5140-4cd1-8229-0c265cf1620b\") " pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.395915 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5wsz\" (UniqueName: \"kubernetes.io/projected/b99a5e21-5140-4cd1-8229-0c265cf1620b-kube-api-access-v5wsz\") pod \"crc-debug-kwjrl\" (UID: \"b99a5e21-5140-4cd1-8229-0c265cf1620b\") " pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.396448 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b99a5e21-5140-4cd1-8229-0c265cf1620b-host\") pod \"crc-debug-kwjrl\" (UID: \"b99a5e21-5140-4cd1-8229-0c265cf1620b\") " pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.423888 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5wsz\" (UniqueName: \"kubernetes.io/projected/b99a5e21-5140-4cd1-8229-0c265cf1620b-kube-api-access-v5wsz\") pod \"crc-debug-kwjrl\" (UID: \"b99a5e21-5140-4cd1-8229-0c265cf1620b\") " pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:38:22 crc kubenswrapper[4813]: I1202 11:38:22.493256 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:38:22 crc kubenswrapper[4813]: W1202 11:38:22.536882 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb99a5e21_5140_4cd1_8229_0c265cf1620b.slice/crio-9f8539e8de67e07dc39abcdcb8242425836ca9c25c09714a2e43ebe1b5f049cc WatchSource:0}: Error finding container 9f8539e8de67e07dc39abcdcb8242425836ca9c25c09714a2e43ebe1b5f049cc: Status 404 returned error can't find the container with id 9f8539e8de67e07dc39abcdcb8242425836ca9c25c09714a2e43ebe1b5f049cc
Dec 02 11:38:23 crc kubenswrapper[4813]: I1202 11:38:23.556637 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/crc-debug-kwjrl" event={"ID":"b99a5e21-5140-4cd1-8229-0c265cf1620b","Type":"ContainerStarted","Data":"9f8539e8de67e07dc39abcdcb8242425836ca9c25c09714a2e43ebe1b5f049cc"}
Dec 02 11:38:33 crc kubenswrapper[4813]: I1202 11:38:33.650903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/crc-debug-kwjrl" event={"ID":"b99a5e21-5140-4cd1-8229-0c265cf1620b","Type":"ContainerStarted","Data":"f69e4ed7e4fab7aba24de312a55acf452ff946fc9dac89029f269bc18f3c97c4"}
Dec 02 11:38:33 crc kubenswrapper[4813]: I1202 11:38:33.662886 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p75mk/crc-debug-kwjrl" podStartSLOduration=0.918020409 podStartE2EDuration="11.662871935s" podCreationTimestamp="2025-12-02 11:38:22 +0000 UTC" firstStartedPulling="2025-12-02 11:38:22.538633486 +0000 UTC m=+5426.733807788" lastFinishedPulling="2025-12-02 11:38:33.283485012 +0000 UTC m=+5437.478659314" observedRunningTime="2025-12-02 11:38:33.660123847 +0000 UTC m=+5437.855298149" watchObservedRunningTime="2025-12-02 11:38:33.662871935 +0000 UTC m=+5437.858046237"
Dec 02 11:38:34 crc kubenswrapper[4813]: I1202 11:38:34.273248 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 11:38:34 crc kubenswrapper[4813]: I1202 11:38:34.273563 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.273636 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.274740 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.274833 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g"
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.276381 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af6188dd090f1d6022234e4677a02bf82d17c87079eb894b786ec7ad5ec942d9"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.276651 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://af6188dd090f1d6022234e4677a02bf82d17c87079eb894b786ec7ad5ec942d9" gracePeriod=600
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.984725 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="af6188dd090f1d6022234e4677a02bf82d17c87079eb894b786ec7ad5ec942d9" exitCode=0
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.984820 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"af6188dd090f1d6022234e4677a02bf82d17c87079eb894b786ec7ad5ec942d9"}
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.985422 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034"}
Dec 02 11:39:04 crc kubenswrapper[4813]: I1202 11:39:04.985444 4813 scope.go:117] "RemoveContainer" containerID="d6aef7e33921c85e8d4319c8543e2215d3f531a7fab1d60ce48a4e8b2ea650c1"
Dec 02 11:39:15 crc kubenswrapper[4813]: I1202 11:39:15.075370 4813 generic.go:334] "Generic (PLEG): container finished" podID="b99a5e21-5140-4cd1-8229-0c265cf1620b" containerID="f69e4ed7e4fab7aba24de312a55acf452ff946fc9dac89029f269bc18f3c97c4" exitCode=0
Dec 02 11:39:15 crc kubenswrapper[4813]: I1202 11:39:15.075488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/crc-debug-kwjrl" event={"ID":"b99a5e21-5140-4cd1-8229-0c265cf1620b","Type":"ContainerDied","Data":"f69e4ed7e4fab7aba24de312a55acf452ff946fc9dac89029f269bc18f3c97c4"}
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.179727 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.211026 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p75mk/crc-debug-kwjrl"]
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.219172 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p75mk/crc-debug-kwjrl"]
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.241663 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5wsz\" (UniqueName: \"kubernetes.io/projected/b99a5e21-5140-4cd1-8229-0c265cf1620b-kube-api-access-v5wsz\") pod \"b99a5e21-5140-4cd1-8229-0c265cf1620b\" (UID: \"b99a5e21-5140-4cd1-8229-0c265cf1620b\") "
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.241942 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b99a5e21-5140-4cd1-8229-0c265cf1620b-host\") pod \"b99a5e21-5140-4cd1-8229-0c265cf1620b\" (UID: \"b99a5e21-5140-4cd1-8229-0c265cf1620b\") "
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.242025 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b99a5e21-5140-4cd1-8229-0c265cf1620b-host" (OuterVolumeSpecName: "host") pod "b99a5e21-5140-4cd1-8229-0c265cf1620b" (UID: "b99a5e21-5140-4cd1-8229-0c265cf1620b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.242557 4813 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b99a5e21-5140-4cd1-8229-0c265cf1620b-host\") on node \"crc\" DevicePath \"\""
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.247108 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99a5e21-5140-4cd1-8229-0c265cf1620b-kube-api-access-v5wsz" (OuterVolumeSpecName: "kube-api-access-v5wsz") pod "b99a5e21-5140-4cd1-8229-0c265cf1620b" (UID: "b99a5e21-5140-4cd1-8229-0c265cf1620b"). InnerVolumeSpecName "kube-api-access-v5wsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:39:16 crc kubenswrapper[4813]: I1202 11:39:16.343405 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5wsz\" (UniqueName: \"kubernetes.io/projected/b99a5e21-5140-4cd1-8229-0c265cf1620b-kube-api-access-v5wsz\") on node \"crc\" DevicePath \"\""
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.098603 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f8539e8de67e07dc39abcdcb8242425836ca9c25c09714a2e43ebe1b5f049cc"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.098678 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-kwjrl"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.500282 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p75mk/crc-debug-mnwjt"]
Dec 02 11:39:17 crc kubenswrapper[4813]: E1202 11:39:17.501165 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99a5e21-5140-4cd1-8229-0c265cf1620b" containerName="container-00"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.501184 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99a5e21-5140-4cd1-8229-0c265cf1620b" containerName="container-00"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.501490 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99a5e21-5140-4cd1-8229-0c265cf1620b" containerName="container-00"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.502375 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.572443 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrdp\" (UniqueName: \"kubernetes.io/projected/1f4b1dfc-be49-431a-8e9b-3c40bff99274-kube-api-access-dzrdp\") pod \"crc-debug-mnwjt\" (UID: \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\") " pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.572507 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f4b1dfc-be49-431a-8e9b-3c40bff99274-host\") pod \"crc-debug-mnwjt\" (UID: \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\") " pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.674104 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f4b1dfc-be49-431a-8e9b-3c40bff99274-host\") pod \"crc-debug-mnwjt\" (UID: \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\") " pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.674262 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f4b1dfc-be49-431a-8e9b-3c40bff99274-host\") pod \"crc-debug-mnwjt\" (UID: \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\") " pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.674359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrdp\" (UniqueName: \"kubernetes.io/projected/1f4b1dfc-be49-431a-8e9b-3c40bff99274-kube-api-access-dzrdp\") pod \"crc-debug-mnwjt\" (UID: \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\") " pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.693991 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrdp\" (UniqueName: \"kubernetes.io/projected/1f4b1dfc-be49-431a-8e9b-3c40bff99274-kube-api-access-dzrdp\") pod \"crc-debug-mnwjt\" (UID: \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\") " pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:17 crc kubenswrapper[4813]: I1202 11:39:17.820538 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:18 crc kubenswrapper[4813]: I1202 11:39:18.084608 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99a5e21-5140-4cd1-8229-0c265cf1620b" path="/var/lib/kubelet/pods/b99a5e21-5140-4cd1-8229-0c265cf1620b/volumes"
Dec 02 11:39:18 crc kubenswrapper[4813]: I1202 11:39:18.109246 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/crc-debug-mnwjt" event={"ID":"1f4b1dfc-be49-431a-8e9b-3c40bff99274","Type":"ContainerStarted","Data":"f79c5177ccb1ac24473c8f7d5efd757173da2a912b6c13e6f9a50de39d61f8d5"}
Dec 02 11:39:18 crc kubenswrapper[4813]: I1202 11:39:18.109307 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/crc-debug-mnwjt" event={"ID":"1f4b1dfc-be49-431a-8e9b-3c40bff99274","Type":"ContainerStarted","Data":"c797220196c61bdd9542ac62fea227db9100607019c397d6afa01b6e72afda57"}
Dec 02 11:39:18 crc kubenswrapper[4813]: I1202 11:39:18.132470 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p75mk/crc-debug-mnwjt" podStartSLOduration=1.13245009 podStartE2EDuration="1.13245009s" podCreationTimestamp="2025-12-02 11:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:39:18.125676208 +0000 UTC m=+5482.320850550" watchObservedRunningTime="2025-12-02 11:39:18.13245009 +0000 UTC m=+5482.327624402"
Dec 02 11:39:19 crc kubenswrapper[4813]: I1202 11:39:19.119008 4813 generic.go:334] "Generic (PLEG): container finished" podID="1f4b1dfc-be49-431a-8e9b-3c40bff99274" containerID="f79c5177ccb1ac24473c8f7d5efd757173da2a912b6c13e6f9a50de39d61f8d5" exitCode=0
Dec 02 11:39:19 crc kubenswrapper[4813]: I1202 11:39:19.119112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/crc-debug-mnwjt" event={"ID":"1f4b1dfc-be49-431a-8e9b-3c40bff99274","Type":"ContainerDied","Data":"f79c5177ccb1ac24473c8f7d5efd757173da2a912b6c13e6f9a50de39d61f8d5"}
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.228882 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.422875 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f4b1dfc-be49-431a-8e9b-3c40bff99274-host\") pod \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\" (UID: \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\") "
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.422990 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f4b1dfc-be49-431a-8e9b-3c40bff99274-host" (OuterVolumeSpecName: "host") pod "1f4b1dfc-be49-431a-8e9b-3c40bff99274" (UID: "1f4b1dfc-be49-431a-8e9b-3c40bff99274"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.423662 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzrdp\" (UniqueName: \"kubernetes.io/projected/1f4b1dfc-be49-431a-8e9b-3c40bff99274-kube-api-access-dzrdp\") pod \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\" (UID: \"1f4b1dfc-be49-431a-8e9b-3c40bff99274\") "
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.424172 4813 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f4b1dfc-be49-431a-8e9b-3c40bff99274-host\") on node \"crc\" DevicePath \"\""
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.434509 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4b1dfc-be49-431a-8e9b-3c40bff99274-kube-api-access-dzrdp" (OuterVolumeSpecName: "kube-api-access-dzrdp") pod "1f4b1dfc-be49-431a-8e9b-3c40bff99274" (UID: "1f4b1dfc-be49-431a-8e9b-3c40bff99274"). InnerVolumeSpecName "kube-api-access-dzrdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.525541 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzrdp\" (UniqueName: \"kubernetes.io/projected/1f4b1dfc-be49-431a-8e9b-3c40bff99274-kube-api-access-dzrdp\") on node \"crc\" DevicePath \"\""
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.753118 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p75mk/crc-debug-mnwjt"]
Dec 02 11:39:20 crc kubenswrapper[4813]: I1202 11:39:20.760215 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p75mk/crc-debug-mnwjt"]
Dec 02 11:39:21 crc kubenswrapper[4813]: I1202 11:39:21.146717 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c797220196c61bdd9542ac62fea227db9100607019c397d6afa01b6e72afda57"
Dec 02 11:39:21 crc kubenswrapper[4813]: I1202 11:39:21.146769 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-mnwjt"
Dec 02 11:39:21 crc kubenswrapper[4813]: I1202 11:39:21.977356 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p75mk/crc-debug-5vcgt"]
Dec 02 11:39:21 crc kubenswrapper[4813]: E1202 11:39:21.978304 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4b1dfc-be49-431a-8e9b-3c40bff99274" containerName="container-00"
Dec 02 11:39:21 crc kubenswrapper[4813]: I1202 11:39:21.978325 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4b1dfc-be49-431a-8e9b-3c40bff99274" containerName="container-00"
Dec 02 11:39:21 crc kubenswrapper[4813]: I1202 11:39:21.978680 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4b1dfc-be49-431a-8e9b-3c40bff99274" containerName="container-00"
Dec 02 11:39:21 crc kubenswrapper[4813]: I1202 11:39:21.979667 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:22 crc kubenswrapper[4813]: I1202 11:39:22.087184 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4b1dfc-be49-431a-8e9b-3c40bff99274" path="/var/lib/kubelet/pods/1f4b1dfc-be49-431a-8e9b-3c40bff99274/volumes"
Dec 02 11:39:22 crc kubenswrapper[4813]: I1202 11:39:22.162472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1da77237-989f-49e3-a62e-e5366e2679ec-host\") pod \"crc-debug-5vcgt\" (UID: \"1da77237-989f-49e3-a62e-e5366e2679ec\") " pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:22 crc kubenswrapper[4813]: I1202 11:39:22.162677 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzdc\" (UniqueName: \"kubernetes.io/projected/1da77237-989f-49e3-a62e-e5366e2679ec-kube-api-access-5dzdc\") pod \"crc-debug-5vcgt\" (UID: \"1da77237-989f-49e3-a62e-e5366e2679ec\") " pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:22 crc kubenswrapper[4813]: I1202 11:39:22.266025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1da77237-989f-49e3-a62e-e5366e2679ec-host\") pod \"crc-debug-5vcgt\" (UID: \"1da77237-989f-49e3-a62e-e5366e2679ec\") " pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:22 crc kubenswrapper[4813]: I1202 11:39:22.266342 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dzdc\" (UniqueName: \"kubernetes.io/projected/1da77237-989f-49e3-a62e-e5366e2679ec-kube-api-access-5dzdc\") pod \"crc-debug-5vcgt\" (UID: \"1da77237-989f-49e3-a62e-e5366e2679ec\") " pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:22 crc kubenswrapper[4813]: I1202 11:39:22.266436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1da77237-989f-49e3-a62e-e5366e2679ec-host\") pod \"crc-debug-5vcgt\" (UID: \"1da77237-989f-49e3-a62e-e5366e2679ec\") " pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:22 crc kubenswrapper[4813]: I1202 11:39:22.309560 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dzdc\" (UniqueName: \"kubernetes.io/projected/1da77237-989f-49e3-a62e-e5366e2679ec-kube-api-access-5dzdc\") pod \"crc-debug-5vcgt\" (UID: \"1da77237-989f-49e3-a62e-e5366e2679ec\") " pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:22 crc kubenswrapper[4813]: I1202 11:39:22.600174 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:22 crc kubenswrapper[4813]: W1202 11:39:22.668357 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da77237_989f_49e3_a62e_e5366e2679ec.slice/crio-fca1fe3da4582a3c0ecbdcc12e64cd9c95f125d14357bfae4a5ec815b2acd0a4 WatchSource:0}: Error finding container fca1fe3da4582a3c0ecbdcc12e64cd9c95f125d14357bfae4a5ec815b2acd0a4: Status 404 returned error can't find the container with id fca1fe3da4582a3c0ecbdcc12e64cd9c95f125d14357bfae4a5ec815b2acd0a4
Dec 02 11:39:23 crc kubenswrapper[4813]: I1202 11:39:23.184856 4813 generic.go:334] "Generic (PLEG): container finished" podID="1da77237-989f-49e3-a62e-e5366e2679ec" containerID="b2a8b0e49a8334601341bde2ede64c79dcdfcbdf81f3bd85e5a6f717e81687c1" exitCode=0
Dec 02 11:39:23 crc kubenswrapper[4813]: I1202 11:39:23.185307 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/crc-debug-5vcgt" event={"ID":"1da77237-989f-49e3-a62e-e5366e2679ec","Type":"ContainerDied","Data":"b2a8b0e49a8334601341bde2ede64c79dcdfcbdf81f3bd85e5a6f717e81687c1"}
Dec 02 11:39:23 crc kubenswrapper[4813]: I1202 11:39:23.185351 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/crc-debug-5vcgt" event={"ID":"1da77237-989f-49e3-a62e-e5366e2679ec","Type":"ContainerStarted","Data":"fca1fe3da4582a3c0ecbdcc12e64cd9c95f125d14357bfae4a5ec815b2acd0a4"}
Dec 02 11:39:23 crc kubenswrapper[4813]: I1202 11:39:23.239599 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-p75mk/crc-debug-5vcgt"]
Dec 02 11:39:23 crc kubenswrapper[4813]: I1202 11:39:23.252096 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p75mk/crc-debug-5vcgt"]
Dec 02 11:39:24 crc kubenswrapper[4813]: I1202 11:39:24.316791 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:24 crc kubenswrapper[4813]: I1202 11:39:24.517357 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dzdc\" (UniqueName: \"kubernetes.io/projected/1da77237-989f-49e3-a62e-e5366e2679ec-kube-api-access-5dzdc\") pod \"1da77237-989f-49e3-a62e-e5366e2679ec\" (UID: \"1da77237-989f-49e3-a62e-e5366e2679ec\") "
Dec 02 11:39:24 crc kubenswrapper[4813]: I1202 11:39:24.517719 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1da77237-989f-49e3-a62e-e5366e2679ec-host\") pod \"1da77237-989f-49e3-a62e-e5366e2679ec\" (UID: \"1da77237-989f-49e3-a62e-e5366e2679ec\") "
Dec 02 11:39:24 crc kubenswrapper[4813]: I1202 11:39:24.517945 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1da77237-989f-49e3-a62e-e5366e2679ec-host" (OuterVolumeSpecName: "host") pod "1da77237-989f-49e3-a62e-e5366e2679ec" (UID: "1da77237-989f-49e3-a62e-e5366e2679ec"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 02 11:39:24 crc kubenswrapper[4813]: I1202 11:39:24.518623 4813 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1da77237-989f-49e3-a62e-e5366e2679ec-host\") on node \"crc\" DevicePath \"\""
Dec 02 11:39:24 crc kubenswrapper[4813]: I1202 11:39:24.526347 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da77237-989f-49e3-a62e-e5366e2679ec-kube-api-access-5dzdc" (OuterVolumeSpecName: "kube-api-access-5dzdc") pod "1da77237-989f-49e3-a62e-e5366e2679ec" (UID: "1da77237-989f-49e3-a62e-e5366e2679ec"). InnerVolumeSpecName "kube-api-access-5dzdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 11:39:24 crc kubenswrapper[4813]: I1202 11:39:24.621102 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dzdc\" (UniqueName: \"kubernetes.io/projected/1da77237-989f-49e3-a62e-e5366e2679ec-kube-api-access-5dzdc\") on node \"crc\" DevicePath \"\""
Dec 02 11:39:25 crc kubenswrapper[4813]: I1202 11:39:25.210444 4813 scope.go:117] "RemoveContainer" containerID="b2a8b0e49a8334601341bde2ede64c79dcdfcbdf81f3bd85e5a6f717e81687c1"
Dec 02 11:39:25 crc kubenswrapper[4813]: I1202 11:39:25.210560 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/crc-debug-5vcgt"
Dec 02 11:39:26 crc kubenswrapper[4813]: I1202 11:39:26.088835 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da77237-989f-49e3-a62e-e5366e2679ec" path="/var/lib/kubelet/pods/1da77237-989f-49e3-a62e-e5366e2679ec/volumes"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.384354 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mbnzz"]
Dec 02 11:40:02 crc kubenswrapper[4813]: E1202 11:40:02.385355 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da77237-989f-49e3-a62e-e5366e2679ec" containerName="container-00"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.385368 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da77237-989f-49e3-a62e-e5366e2679ec" containerName="container-00"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.385567 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da77237-989f-49e3-a62e-e5366e2679ec" containerName="container-00"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.386903 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.397627 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mbnzz"]
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.518764 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-utilities\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.519061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2gl\" (UniqueName: \"kubernetes.io/projected/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-kube-api-access-zg2gl\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.519435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-catalog-content\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.621879 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-utilities\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.622184 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg2gl\" (UniqueName: \"kubernetes.io/projected/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-kube-api-access-zg2gl\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.622328 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-catalog-content\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.622529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-utilities\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.622755 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-catalog-content\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz"
Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.655608 4813 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-zg2gl\" (UniqueName: \"kubernetes.io/projected/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-kube-api-access-zg2gl\") pod \"certified-operators-mbnzz\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " pod="openshift-marketplace/certified-operators-mbnzz" Dec 02 11:40:02 crc kubenswrapper[4813]: I1202 11:40:02.714475 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbnzz" Dec 02 11:40:03 crc kubenswrapper[4813]: I1202 11:40:03.246787 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mbnzz"] Dec 02 11:40:03 crc kubenswrapper[4813]: I1202 11:40:03.670415 4813 generic.go:334] "Generic (PLEG): container finished" podID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerID="130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b" exitCode=0 Dec 02 11:40:03 crc kubenswrapper[4813]: I1202 11:40:03.670677 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbnzz" event={"ID":"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569","Type":"ContainerDied","Data":"130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b"} Dec 02 11:40:03 crc kubenswrapper[4813]: I1202 11:40:03.671022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbnzz" event={"ID":"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569","Type":"ContainerStarted","Data":"de65477afa795b4ca9a74fb9ef20dc2b685c141f5215b76ec6d895aa76cb4a61"} Dec 02 11:40:03 crc kubenswrapper[4813]: I1202 11:40:03.675141 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:40:05 crc kubenswrapper[4813]: I1202 11:40:05.698055 4813 generic.go:334] "Generic (PLEG): container finished" podID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerID="4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c" exitCode=0 Dec 02 11:40:05 crc kubenswrapper[4813]: I1202 11:40:05.698117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbnzz" event={"ID":"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569","Type":"ContainerDied","Data":"4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c"} Dec 02 11:40:06 crc kubenswrapper[4813]: I1202 11:40:06.715338 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbnzz" event={"ID":"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569","Type":"ContainerStarted","Data":"758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204"} Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.004564 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cbdc6498b-rbl7g_03b395a1-3aae-4528-9d4c-3d4dc0413de4/barbican-api/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.169356 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-66d478b798-8tjdd_c66503cc-41b9-44f1-8f42-f65908004aef/barbican-keystone-listener/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.179566 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7cbdc6498b-rbl7g_03b395a1-3aae-4528-9d4c-3d4dc0413de4/barbican-api-log/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.366100 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-5fbc57bb6c-lbqlm_71d75001-77f3-47d8-822f-c2b72f0d9226/barbican-worker/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.445000 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fbc57bb6c-lbqlm_71d75001-77f3-47d8-822f-c2b72f0d9226/barbican-worker-log/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.448342 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-66d478b798-8tjdd_c66503cc-41b9-44f1-8f42-f65908004aef/barbican-keystone-listener-log/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.644609 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd503396-3ca3-46ca-850c-51717dc92ba4/ceilometer-central-agent/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.693829 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mgjvq_274340d8-d37f-4d14-a810-be92bd373f3f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.844669 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd503396-3ca3-46ca-850c-51717dc92ba4/ceilometer-notification-agent/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.898551 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd503396-3ca3-46ca-850c-51717dc92ba4/sg-core/0.log" Dec 02 11:40:09 crc kubenswrapper[4813]: I1202 11:40:09.918263 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd503396-3ca3-46ca-850c-51717dc92ba4/proxy-httpd/0.log" Dec 02 11:40:10 crc kubenswrapper[4813]: I1202 11:40:10.027796 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-lt5nr_0d080fa4-1ffb-4c15-beb2-110224e86841/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:10 crc kubenswrapper[4813]: I1202 11:40:10.095843 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-9vt79_2edf3b91-b0f6-4e9b-80c1-e4e3e0386cb7/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:10 crc kubenswrapper[4813]: I1202 11:40:10.714882 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_bf61107f-cf86-48d8-a9db-bde098c122f0/probe/0.log" Dec 02 11:40:11 crc kubenswrapper[4813]: I1202 11:40:11.209219 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_84d8089e-8fae-4958-9b81-ee39f00022b7/cinder-api/0.log" Dec 02 11:40:11 crc kubenswrapper[4813]: I1202 11:40:11.288883 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5db6a7c9-1d30-44e1-8357-5964f5d4cb09/cinder-scheduler/0.log" Dec 02 11:40:11 crc kubenswrapper[4813]: I1202 11:40:11.381485 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_84d8089e-8fae-4958-9b81-ee39f00022b7/cinder-api-log/0.log" Dec 02 11:40:11 crc kubenswrapper[4813]: I1202 11:40:11.438416 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5db6a7c9-1d30-44e1-8357-5964f5d4cb09/probe/0.log" Dec 02 11:40:11 crc kubenswrapper[4813]: I1202 11:40:11.722985 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-volume1-0_6946f6db-913a-4505-b3db-e96e89534a35/probe/0.log" Dec 02 11:40:11 crc kubenswrapper[4813]: I1202 11:40:11.967631 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-zn92g_bf066df0-6f94-4514-9d92-30d252aea2f7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.194652 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rg249_6c99184e-d396-4734-985d-0f4312e5f82b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.387723 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-s24ls_1518b626-2bab-4d9b-8572-f6fae0f49bea/init/0.log" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.581911 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-s24ls_1518b626-2bab-4d9b-8572-f6fae0f49bea/init/0.log" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.713430 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-s24ls_1518b626-2bab-4d9b-8572-f6fae0f49bea/dnsmasq-dns/0.log" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.717140 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mbnzz" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.717180 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mbnzz" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.734401 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_bf61107f-cf86-48d8-a9db-bde098c122f0/cinder-backup/0.log" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.766970 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_78c39f26-5444-4386-99f7-f672f7554931/glance-httpd/0.log" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.780785 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mbnzz" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.802107 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mbnzz" podStartSLOduration=8.35290682 podStartE2EDuration="10.802092358s" podCreationTimestamp="2025-12-02 11:40:02 +0000 UTC" firstStartedPulling="2025-12-02 11:40:03.674399787 +0000 UTC m=+5527.869574129" lastFinishedPulling="2025-12-02 11:40:06.123585355 +0000 UTC m=+5530.318759667" observedRunningTime="2025-12-02 11:40:06.766848673 +0000 UTC m=+5530.962023005" watchObservedRunningTime="2025-12-02 11:40:12.802092358 +0000 UTC m=+5536.997266660" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.829369 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mbnzz" Dec 02 11:40:12 crc kubenswrapper[4813]: I1202 11:40:12.889294 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_78c39f26-5444-4386-99f7-f672f7554931/glance-log/0.log" Dec 02 11:40:13 crc kubenswrapper[4813]: I1202 11:40:13.027613 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-mbnzz"] Dec 02 11:40:13 crc kubenswrapper[4813]: I1202 11:40:13.181111 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7b990be9-d837-4418-8909-3b050114af00/glance-httpd/0.log" Dec 02 11:40:13 crc kubenswrapper[4813]: I1202 11:40:13.242168 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7b990be9-d837-4418-8909-3b050114af00/glance-log/0.log" Dec 02 11:40:13 crc kubenswrapper[4813]: I1202 11:40:13.454167 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c5ddb87c8-5vtbk_757d290c-ab26-4557-a758-10924585a86b/horizon/0.log" Dec 02 11:40:13 crc kubenswrapper[4813]: I1202 11:40:13.522593 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-htwnn_485c1630-9c8a-4474-b75c-4bfed04bcea9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:13 crc kubenswrapper[4813]: I1202 11:40:13.690394 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c5ddb87c8-5vtbk_757d290c-ab26-4557-a758-10924585a86b/horizon-log/0.log" Dec 02 11:40:13 crc kubenswrapper[4813]: I1202 11:40:13.702804 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g5ppq_8b306c3a-786a-44f8-83be-75641ead26f3/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:13 crc kubenswrapper[4813]: I1202 11:40:13.928215 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29411221-wc6vj_2c04c0e4-5a90-4287-bcbe-11e190ec6005/keystone-cron/0.log" Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.112147 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4c015ea5-0d0f-4139-8a7e-eb19d478f879/kube-state-metrics/0.log" Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.178190 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-m6tb7_d12b539d-a4ef-4c0d-9770-af7b7543d284/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.540436 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_5aee8529-5e7c-4f43-b683-ada4d72cebe4/manila-api-log/0.log" Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.570978 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_5aee8529-5e7c-4f43-b683-ada4d72cebe4/manila-api/0.log" Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.714420 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-75b688f9cf-8gj5w_6236fd33-cc67-443d-bb34-287b98d8ed72/keystone-api/0.log" Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.767445 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_113670c8-595f-41d3-8af4-47f7cc0a6833/probe/0.log" Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.778785 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mbnzz" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerName="registry-server" containerID="cri-o://758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204" gracePeriod=2 Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.823981 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_113670c8-595f-41d3-8af4-47f7cc0a6833/manila-scheduler/0.log" Dec 02 11:40:14 crc kubenswrapper[4813]: I1202 11:40:14.980812 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_99110d27-ba93-4f75-a898-acf87c7b4f14/probe/0.log" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.013318 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_99110d27-ba93-4f75-a898-acf87c7b4f14/manila-share/0.log" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.205325 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mbnzz" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.388053 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-catalog-content\") pod \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.388230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg2gl\" (UniqueName: \"kubernetes.io/projected/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-kube-api-access-zg2gl\") pod \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.388274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-utilities\") pod \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\" (UID: \"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569\") " Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.390109 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-utilities" (OuterVolumeSpecName: "utilities") pod "8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" (UID: "8f1c9ce0-1a0c-4b75-af24-d5f3e789f569"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.396605 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-kube-api-access-zg2gl" (OuterVolumeSpecName: "kube-api-access-zg2gl") pod "8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" (UID: "8f1c9ce0-1a0c-4b75-af24-d5f3e789f569"). InnerVolumeSpecName "kube-api-access-zg2gl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.482315 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5764b7874f-mhh86_6edf0036-9e60-413c-9a23-38a0c0e95a84/neutron-httpd/0.log" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.490982 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg2gl\" (UniqueName: \"kubernetes.io/projected/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-kube-api-access-zg2gl\") on node \"crc\" DevicePath \"\"" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.491015 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.507754 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" (UID: "8f1c9ce0-1a0c-4b75-af24-d5f3e789f569"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.592206 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.615134 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5764b7874f-mhh86_6edf0036-9e60-413c-9a23-38a0c0e95a84/neutron-api/0.log" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.727009 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssmn6_a4be7c16-2599-4533-9efd-256afaa43b58/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.787725 4813 generic.go:334] "Generic (PLEG): container finished" podID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerID="758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204" exitCode=0 Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.787762 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbnzz" event={"ID":"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569","Type":"ContainerDied","Data":"758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204"} Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.787787 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mbnzz" event={"ID":"8f1c9ce0-1a0c-4b75-af24-d5f3e789f569","Type":"ContainerDied","Data":"de65477afa795b4ca9a74fb9ef20dc2b685c141f5215b76ec6d895aa76cb4a61"} Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.787803 4813 scope.go:117] "RemoveContainer" containerID="758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.787927 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mbnzz" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.822301 4813 scope.go:117] "RemoveContainer" containerID="4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.823440 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mbnzz"] Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.839646 4813 scope.go:117] "RemoveContainer" containerID="130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.851114 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mbnzz"] Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.898438 4813 scope.go:117] "RemoveContainer" containerID="758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204" Dec 02 11:40:15 crc kubenswrapper[4813]: E1202 11:40:15.899112 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204\": container with ID starting with 758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204 not found: ID does not exist" containerID="758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.899157 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204"} err="failed to get container status \"758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204\": rpc error: code = NotFound desc = could not find container \"758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204\": container with ID starting with 758a2d549bad51b799b146a5de03eceb6b34bed3ced49b2eabe10e4a6e7c0204 not found: ID does not exist" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.899187 4813 scope.go:117] "RemoveContainer" containerID="4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c" Dec 02 11:40:15 crc kubenswrapper[4813]: E1202 11:40:15.899582 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c\": container with ID starting with 4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c not found: ID does not exist" containerID="4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.899623 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c"} err="failed to get container status \"4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c\": rpc error: code = NotFound desc = could not find container \"4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c\": container with ID starting with 4877ab45526ac81cfffadfeaf4f24e9553f9a1938e5f47a38249adf3c86e6c3c not found: ID does not exist" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.899652 4813 scope.go:117] "RemoveContainer" containerID="130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b" Dec 02 11:40:15 crc kubenswrapper[4813]: E1202 11:40:15.899961 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b\": container with ID starting with 130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b not found: ID does not exist" containerID="130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b" Dec 02 11:40:15 crc kubenswrapper[4813]: I1202 11:40:15.899983 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b"} err="failed to get container status \"130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b\": rpc error: code = NotFound desc = could not find container \"130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b\": container with ID starting with 130efd26c824bf241ddf7abd107c88df80f8ac2317c00befd3c2e0dbaab2849b not found: ID does not exist" Dec 02 11:40:16 crc kubenswrapper[4813]: I1202 11:40:16.078012 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" path="/var/lib/kubelet/pods/8f1c9ce0-1a0c-4b75-af24-d5f3e789f569/volumes" Dec 02 11:40:16 crc kubenswrapper[4813]: I1202 11:40:16.441260 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3e2709a9-5caf-4939-b835-4ecbf7d9e865/nova-cell0-conductor-conductor/0.log" Dec 02 11:40:16 crc kubenswrapper[4813]: I1202 11:40:16.616413 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3982e534-d3ba-4867-8e51-2576155575e0/nova-api-log/0.log" Dec 02 11:40:17 crc kubenswrapper[4813]: I1202 11:40:17.000545 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6ef24984-82f3-4b10-997c-3051d8a59c5f/nova-cell1-conductor-conductor/0.log" Dec 02 11:40:17 crc kubenswrapper[4813]: I1202 11:40:17.022584 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3982e534-d3ba-4867-8e51-2576155575e0/nova-api-api/0.log" Dec 02 11:40:17 crc kubenswrapper[4813]: I1202 11:40:17.140437 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_04a01b98-1643-4e76-8fde-e7951d129581/nova-cell1-novncproxy-novncproxy/0.log" Dec 02 11:40:17 crc kubenswrapper[4813]: I1202 11:40:17.597102 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-r58g2_09685150-e1df-4f9e-9780-b44084b88a32/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:17 crc kubenswrapper[4813]: I1202 11:40:17.894938 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0a79c68c-68f6-4ea8-9752-3f5710f6f29b/nova-metadata-log/0.log" Dec 02 11:40:18 crc kubenswrapper[4813]: I1202 11:40:18.211759 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_26f3e7d5-190b-4014-9357-a26896d27248/nova-scheduler-scheduler/0.log" Dec 02 11:40:18 crc kubenswrapper[4813]: I1202 11:40:18.293169 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c5717348-ff61-4e62-9c41-3553228842f9/mysql-bootstrap/0.log" Dec 02 11:40:18 crc kubenswrapper[4813]: I1202 11:40:18.446136 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c5717348-ff61-4e62-9c41-3553228842f9/mysql-bootstrap/0.log" Dec 02 11:40:18 crc kubenswrapper[4813]: I1202 11:40:18.556107 4813 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c5717348-ff61-4e62-9c41-3553228842f9/galera/0.log" Dec 02 11:40:18 crc kubenswrapper[4813]: I1202 11:40:18.785157 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_53e726ce-4b04-4f80-b0a6-20919949a0e6/mysql-bootstrap/0.log" Dec 02 11:40:18 crc kubenswrapper[4813]: I1202 11:40:18.926174 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_53e726ce-4b04-4f80-b0a6-20919949a0e6/mysql-bootstrap/0.log" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.011982 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_53e726ce-4b04-4f80-b0a6-20919949a0e6/galera/0.log" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.188022 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7d9a5a34-8c3d-4ce8-86f7-fd7e859a0c3f/openstackclient/0.log" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.297192 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m665p"] Dec 02 11:40:19 crc kubenswrapper[4813]: E1202 11:40:19.297839 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerName="extract-utilities" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.297857 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerName="extract-utilities" Dec 02 11:40:19 crc kubenswrapper[4813]: E1202 11:40:19.297884 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerName="registry-server" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.297890 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerName="registry-server" Dec 02 11:40:19 crc kubenswrapper[4813]: E1202 11:40:19.297904 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerName="extract-content" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.297911 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerName="extract-content" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.298096 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1c9ce0-1a0c-4b75-af24-d5f3e789f569" containerName="registry-server" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.299397 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.305640 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m665p"] Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.414740 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hfxg4_0ce6e9c3-8bfa-4bea-8b33-497328af7573/ovn-controller/0.log" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.471208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldr79\" (UniqueName: \"kubernetes.io/projected/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-kube-api-access-ldr79\") pod \"redhat-operators-m665p\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.471315 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-utilities\") pod \"redhat-operators-m665p\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.471342 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-catalog-content\") pod \"redhat-operators-m665p\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.572510 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldr79\" (UniqueName: \"kubernetes.io/projected/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-kube-api-access-ldr79\") pod \"redhat-operators-m665p\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.572640 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-utilities\") pod \"redhat-operators-m665p\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.572670 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-catalog-content\") pod \"redhat-operators-m665p\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.573254 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-catalog-content\") pod \"redhat-operators-m665p\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.573277 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-utilities\") pod \"redhat-operators-m665p\" (UID: 
\"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.594364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldr79\" (UniqueName: \"kubernetes.io/projected/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-kube-api-access-ldr79\") pod \"redhat-operators-m665p\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.644218 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wwqk7_e08b977d-5597-4076-8ea1-21301801b3b1/openstack-network-exporter/0.log" Dec 02 11:40:19 crc kubenswrapper[4813]: I1202 11:40:19.647410 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.030622 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zmgkl_2dbe4376-1955-47b0-9d67-0d2188ef1532/ovsdb-server-init/0.log" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.148013 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m665p"] Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.265235 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zmgkl_2dbe4376-1955-47b0-9d67-0d2188ef1532/ovs-vswitchd/0.log" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.345919 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zmgkl_2dbe4376-1955-47b0-9d67-0d2188ef1532/ovsdb-server-init/0.log" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.364305 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0a79c68c-68f6-4ea8-9752-3f5710f6f29b/nova-metadata-metadata/0.log" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.531404 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zmgkl_2dbe4376-1955-47b0-9d67-0d2188ef1532/ovsdb-server/0.log" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.561663 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_6946f6db-913a-4505-b3db-e96e89534a35/cinder-volume/0.log" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.624113 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ks2vj_8533db13-ff2a-4a5e-8a6e-30dad8252d93/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.865976 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_43a02521-6143-4cfa-89c6-4b7e536990d8/ovn-northd/0.log" Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.866151 4813 generic.go:334] "Generic (PLEG): container finished" podID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerID="a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8" exitCode=0 Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.866186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m665p" event={"ID":"88f92dc4-3269-4aa7-b67c-bbd4feac6a80","Type":"ContainerDied","Data":"a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8"} Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.866213 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m665p" event={"ID":"88f92dc4-3269-4aa7-b67c-bbd4feac6a80","Type":"ContainerStarted","Data":"e85a310c9ab0a69f5eaa2c52a8ab1d38eb884ca4c1c0139d0d66d2bfefa4f378"} Dec 02 11:40:20 crc kubenswrapper[4813]: I1202 11:40:20.892031 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_43a02521-6143-4cfa-89c6-4b7e536990d8/openstack-network-exporter/0.log" Dec 02 11:40:21 crc kubenswrapper[4813]: I1202 11:40:21.260290 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_79685aab-e537-450f-aecc-0768e316bf66/ovsdbserver-nb/0.log" Dec 02 11:40:21 crc kubenswrapper[4813]: I1202 11:40:21.263209 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_79685aab-e537-450f-aecc-0768e316bf66/openstack-network-exporter/0.log" Dec 02 11:40:21 crc kubenswrapper[4813]: I1202 11:40:21.480276 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_71370d19-e630-47a0-a25e-8815ab28d976/openstack-network-exporter/0.log" Dec 02 11:40:21 crc kubenswrapper[4813]: I1202 11:40:21.483386 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_71370d19-e630-47a0-a25e-8815ab28d976/ovsdbserver-sb/0.log" Dec 02 11:40:21 crc kubenswrapper[4813]: I1202 11:40:21.609214 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-58f7b95df4-zpk5x_0ab19173-c03c-4f46-8f2e-550ea7a70fd3/placement-api/0.log" Dec 02 11:40:21 crc kubenswrapper[4813]: I1202 11:40:21.807880 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-58f7b95df4-zpk5x_0ab19173-c03c-4f46-8f2e-550ea7a70fd3/placement-log/0.log" Dec 02 11:40:21 crc kubenswrapper[4813]: I1202 11:40:21.816986 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2541bb4e-08b2-43e0-8142-81f9af449133/setup-container/0.log" Dec 02 11:40:21 crc kubenswrapper[4813]: I1202 11:40:21.940329 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m665p" event={"ID":"88f92dc4-3269-4aa7-b67c-bbd4feac6a80","Type":"ContainerStarted","Data":"d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a"} Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.105931 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2541bb4e-08b2-43e0-8142-81f9af449133/setup-container/0.log" Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.118482 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2541bb4e-08b2-43e0-8142-81f9af449133/rabbitmq/0.log" Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.134264 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e/setup-container/0.log" Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.372840 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e/setup-container/0.log" Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.386119 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5f73dc68-cdbb-43ae-a9ab-0a07fc36ba8e/rabbitmq/0.log" Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.444168 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-br4dz_bb0bcc40-23dd-4088-99d5-d5a69cb4f2e3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.602300 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zjnnc_a2f71156-b569-4254-8cc5-2e38a5ca5edc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.715499 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9577q_545847c6-6495-4189-84ae-d6d6e6f03097/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:22 crc kubenswrapper[4813]: I1202 11:40:22.913529 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ppbvl_9eb11fe4-0504-4a53-a627-a1314b1115c5/ssh-known-hosts-edpm-deployment/0.log" Dec 02 11:40:23 crc kubenswrapper[4813]: I1202 11:40:23.105024 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_456994e2-7687-4a9b-be60-d172f26b11e4/tempest-tests-tempest-tests-runner/0.log" Dec 02 11:40:23 crc kubenswrapper[4813]: I1202 11:40:23.152740 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6ac6c8ab-4812-4eae-8168-83aa72675800/test-operator-logs-container/0.log" Dec 02 11:40:23 crc kubenswrapper[4813]: I1202 11:40:23.357200 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-l292c_818f8423-ecb9-4ec4-a4af-a6d4e9979032/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 11:40:24 crc kubenswrapper[4813]: I1202 11:40:24.989602 4813 generic.go:334] "Generic (PLEG): container finished" podID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerID="d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a" exitCode=0 Dec 02 11:40:24 crc kubenswrapper[4813]: I1202 11:40:24.989659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m665p" event={"ID":"88f92dc4-3269-4aa7-b67c-bbd4feac6a80","Type":"ContainerDied","Data":"d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a"} Dec 02 11:40:26 crc kubenswrapper[4813]: I1202 11:40:26.009287 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m665p" event={"ID":"88f92dc4-3269-4aa7-b67c-bbd4feac6a80","Type":"ContainerStarted","Data":"ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5"} Dec 02 11:40:26 crc kubenswrapper[4813]: I1202 11:40:26.047740 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m665p" podStartSLOduration=2.175983866 podStartE2EDuration="7.047715385s" podCreationTimestamp="2025-12-02 11:40:19 +0000 UTC" firstStartedPulling="2025-12-02 11:40:20.869303406 +0000 UTC m=+5545.064477708" lastFinishedPulling="2025-12-02 11:40:25.741034915 +0000 UTC m=+5549.936209227" observedRunningTime="2025-12-02 11:40:26.04049268 +0000 UTC m=+5550.235666982" watchObservedRunningTime="2025-12-02 11:40:26.047715385 +0000 UTC m=+5550.242889687" Dec 02 11:40:29 crc kubenswrapper[4813]: I1202 11:40:29.649860 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:29 crc kubenswrapper[4813]: I1202 11:40:29.650741 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:30 crc kubenswrapper[4813]: I1202 11:40:30.703962 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m665p" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="registry-server" probeResult="failure" output=< Dec 02 11:40:30 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Dec 02 11:40:30 crc kubenswrapper[4813]: > Dec 02 11:40:35 crc kubenswrapper[4813]: I1202 11:40:35.545160 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_481fa78c-0062-4dc4-b7a6-c8f5845c5480/memcached/0.log" Dec 02 11:40:39 crc kubenswrapper[4813]: I1202 11:40:39.711774 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:39 crc kubenswrapper[4813]: I1202 11:40:39.794188 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:39 crc kubenswrapper[4813]: I1202 11:40:39.956451 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m665p"] Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.158879 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m665p" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="registry-server" containerID="cri-o://ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5" gracePeriod=2 Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.649684 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.771821 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-catalog-content\") pod \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.772027 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldr79\" (UniqueName: \"kubernetes.io/projected/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-kube-api-access-ldr79\") pod \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.772140 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-utilities\") pod \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\" (UID: \"88f92dc4-3269-4aa7-b67c-bbd4feac6a80\") " Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.773314 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-utilities" (OuterVolumeSpecName: "utilities") pod "88f92dc4-3269-4aa7-b67c-bbd4feac6a80" (UID: "88f92dc4-3269-4aa7-b67c-bbd4feac6a80"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.780901 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-kube-api-access-ldr79" (OuterVolumeSpecName: "kube-api-access-ldr79") pod "88f92dc4-3269-4aa7-b67c-bbd4feac6a80" (UID: "88f92dc4-3269-4aa7-b67c-bbd4feac6a80"). InnerVolumeSpecName "kube-api-access-ldr79". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.875532 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldr79\" (UniqueName: \"kubernetes.io/projected/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-kube-api-access-ldr79\") on node \"crc\" DevicePath \"\"" Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.875575 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.909515 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88f92dc4-3269-4aa7-b67c-bbd4feac6a80" (UID: "88f92dc4-3269-4aa7-b67c-bbd4feac6a80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:40:41 crc kubenswrapper[4813]: I1202 11:40:41.977700 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f92dc4-3269-4aa7-b67c-bbd4feac6a80-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.189266 4813 generic.go:334] "Generic (PLEG): container finished" podID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerID="ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5" exitCode=0 Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.189288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m665p" event={"ID":"88f92dc4-3269-4aa7-b67c-bbd4feac6a80","Type":"ContainerDied","Data":"ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5"} Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.189404 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m665p" event={"ID":"88f92dc4-3269-4aa7-b67c-bbd4feac6a80","Type":"ContainerDied","Data":"e85a310c9ab0a69f5eaa2c52a8ab1d38eb884ca4c1c0139d0d66d2bfefa4f378"} Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.189445 4813 scope.go:117] "RemoveContainer" containerID="ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.189485 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m665p" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.219377 4813 scope.go:117] "RemoveContainer" containerID="d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.220102 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m665p"] Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.228236 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m665p"] Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.249393 4813 scope.go:117] "RemoveContainer" containerID="a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.301329 4813 scope.go:117] "RemoveContainer" containerID="ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5" Dec 02 11:40:42 crc kubenswrapper[4813]: E1202 11:40:42.301853 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5\": container with ID starting with ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5 not found: ID does not exist" containerID="ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.301905 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5"} err="failed to get container status \"ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5\": rpc error: code = NotFound desc = could not find container \"ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5\": container with ID starting with ad98d8e14e23e621a76a39c8c47aed62fcd7081b910e383022fb5fa74202afa5 not found: ID does not exist" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.301940 4813 scope.go:117] "RemoveContainer" containerID="d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a" Dec 02 11:40:42 crc kubenswrapper[4813]: E1202 11:40:42.302633 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a\": container with ID starting with d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a not found: ID does not exist" containerID="d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.302677 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a"} err="failed to get container status \"d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a\": rpc error: code = NotFound desc = could not find container \"d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a\": container with ID starting with d4dff7cae86a125c4903ca3344d51f081d841abe66136a819bdbedeb935e924a not found: ID does not exist" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.302705 4813 scope.go:117] "RemoveContainer" containerID="a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8" Dec 02 11:40:42 crc kubenswrapper[4813]: E1202 11:40:42.303149 4813 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8\": container with ID starting with a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8 not found: ID does not exist" containerID="a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8" Dec 02 11:40:42 crc kubenswrapper[4813]: I1202 11:40:42.303275 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8"} err="failed to get container status \"a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8\": rpc error: code = NotFound desc = could not find container \"a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8\": container with ID starting with a168e3e084571a8bf9d3c97245cf95f228a1d25724a54f1da53d72b6df17ecb8 not found: ID does not exist" Dec 02 11:40:44 crc kubenswrapper[4813]: I1202 11:40:44.083469 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" path="/var/lib/kubelet/pods/88f92dc4-3269-4aa7-b67c-bbd4feac6a80/volumes" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.009734 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n_2b32306d-7ca9-4dff-8119-7502681bc325/util/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.206938 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n_2b32306d-7ca9-4dff-8119-7502681bc325/pull/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.211808 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n_2b32306d-7ca9-4dff-8119-7502681bc325/util/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.269226 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n_2b32306d-7ca9-4dff-8119-7502681bc325/pull/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.431021 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n_2b32306d-7ca9-4dff-8119-7502681bc325/util/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.443122 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n_2b32306d-7ca9-4dff-8119-7502681bc325/pull/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.452569 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0be9bb188729e02a5597e6f036e8e59f7db8c2bc9ba856b8356643001krc8n_2b32306d-7ca9-4dff-8119-7502681bc325/extract/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.598228 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vh67z_a9b5d3a4-c74a-4dc7-95e7-ce34faf34401/kube-rbac-proxy/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.664171 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vh67z_a9b5d3a4-c74a-4dc7-95e7-ce34faf34401/manager/0.log" Dec 02 11:40:51 crc 
kubenswrapper[4813]: I1202 11:40:51.686466 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78c47498c4-pwr72_c78e6c08-10b5-442c-bcc4-96e55238f240/kube-rbac-proxy/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.834643 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-ptlqc_9de86006-d480-4e91-904d-dea58373d496/kube-rbac-proxy/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.840214 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78c47498c4-pwr72_c78e6c08-10b5-442c-bcc4-96e55238f240/manager/0.log" Dec 02 11:40:51 crc kubenswrapper[4813]: I1202 11:40:51.901111 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-ptlqc_9de86006-d480-4e91-904d-dea58373d496/manager/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.023868 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-sxk5l_7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec/kube-rbac-proxy/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.133905 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-sxk5l_7a9b96d0-9a4f-4e67-9b12-94e83e89f4ec/manager/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.238573 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-fd42j_98f2dfc1-669a-430c-a089-859de7ca1688/kube-rbac-proxy/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.312214 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-fd42j_98f2dfc1-669a-430c-a089-859de7ca1688/manager/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.359736 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-525b9_2ba57cac-e437-4de6-a3fa-563d41cd0404/kube-rbac-proxy/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.446063 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-525b9_2ba57cac-e437-4de6-a3fa-563d41cd0404/manager/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.531191 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-2sk2z_baa2abea-8891-4e33-b453-e34dc8e15df7/kube-rbac-proxy/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.703049 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-2sk2z_baa2abea-8891-4e33-b453-e34dc8e15df7/manager/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.723983 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-5n7pc_4da17b88-c060-41ed-ab38-90dc8dd0383e/kube-rbac-proxy/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.752990 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-5n7pc_4da17b88-c060-41ed-ab38-90dc8dd0383e/manager/0.log" 
Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.892589 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-dprfp_2f7373b2-cc78-4f73-9ed5-23d0c3144867/kube-rbac-proxy/0.log" Dec 02 11:40:52 crc kubenswrapper[4813]: I1202 11:40:52.946606 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-dprfp_2f7373b2-cc78-4f73-9ed5-23d0c3144867/manager/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.074927 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-8sknj_95242ae1-57e8-436f-9971-66e273b0d75c/kube-rbac-proxy/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.137739 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-8sknj_95242ae1-57e8-436f-9971-66e273b0d75c/manager/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.203722 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-z2z7m_c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4/kube-rbac-proxy/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.321008 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-z2z7m_c4aed6a6-6a6a-424a-bacb-4a5fb1b5ada4/manager/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.357502 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-c5x5p_796ef4ca-26ba-44f0-b23a-c4fd808c5981/kube-rbac-proxy/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.420159 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-c5x5p_796ef4ca-26ba-44f0-b23a-c4fd808c5981/manager/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.537278 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-grz2d_da18c237-cd3d-4116-9373-989eaf92e7cd/kube-rbac-proxy/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.590741 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-grz2d_da18c237-cd3d-4116-9373-989eaf92e7cd/manager/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.717624 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xbnzh_2b41c1b0-929f-4289-b50d-5567c79a26d8/kube-rbac-proxy/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.774805 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xbnzh_2b41c1b0-929f-4289-b50d-5567c79a26d8/manager/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.849594 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg_8e626c15-e204-4729-8c0f-95b7b101ec43/kube-rbac-proxy/0.log" Dec 02 11:40:53 crc kubenswrapper[4813]: I1202 11:40:53.898499 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4x86gg_8e626c15-e204-4729-8c0f-95b7b101ec43/manager/0.log" Dec 02 11:40:54 crc kubenswrapper[4813]: I1202 11:40:54.279105 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-77f7c7f9d7-cjmjd_f6bc9e20-756e-43de-b1f7-e3ffbbcbd219/operator/0.log" Dec 02 11:40:54 crc kubenswrapper[4813]: I1202 11:40:54.303166 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6v8v4_3b20cb4e-064c-45e2-b461-ddb692c11924/registry-server/0.log" Dec 02 11:40:54 crc kubenswrapper[4813]: I1202 11:40:54.557106 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-wj22v_a22bf838-4122-4704-b8a7-d590e3ba5b65/manager/0.log" Dec 02 11:40:54 crc kubenswrapper[4813]: I1202 11:40:54.565388 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-wj22v_a22bf838-4122-4704-b8a7-d590e3ba5b65/kube-rbac-proxy/0.log" Dec 02 11:40:54 crc kubenswrapper[4813]: I1202 11:40:54.679598 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-wxld7_b2092fa1-ae34-44b4-b89f-d2c1407b911a/kube-rbac-proxy/0.log" Dec 02 11:40:54 crc kubenswrapper[4813]: I1202 11:40:54.831605 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-wxld7_b2092fa1-ae34-44b4-b89f-d2c1407b911a/manager/0.log" Dec 02 11:40:54 crc kubenswrapper[4813]: I1202 11:40:54.873121 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lwjwp_afe1c5ed-adc9-4200-b1c0-8938e759daed/operator/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.040029 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-sl4ml_7543bebd-caf8-49db-99ce-fed3b5ac812a/kube-rbac-proxy/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.046624 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-sl4ml_7543bebd-caf8-49db-99ce-fed3b5ac812a/manager/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.082681 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65b4bc588-254sd_80e020ca-18e4-47c4-aaa7-30eba6e9dfd8/manager/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.212603 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-nhfvs_aff40ee1-2e46-4923-8138-09046b9568dd/kube-rbac-proxy/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.292688 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-nhfvs_aff40ee1-2e46-4923-8138-09046b9568dd/manager/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.321350 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-l6sr9_3bed8e3c-64ca-47e0-80b2-ec2f40473db9/kube-rbac-proxy/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.342294 4813 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-l6sr9_3bed8e3c-64ca-47e0-80b2-ec2f40473db9/manager/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.481114 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-bflrb_f1a3ada5-a084-4500-8c1b-a9e6e3008786/manager/0.log" Dec 02 11:40:55 crc kubenswrapper[4813]: I1202 11:40:55.482304 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-bflrb_f1a3ada5-a084-4500-8c1b-a9e6e3008786/kube-rbac-proxy/0.log" Dec 02 11:41:04 crc kubenswrapper[4813]: I1202 11:41:04.273796 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:41:04 crc kubenswrapper[4813]: I1202 11:41:04.274687 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:41:15 crc kubenswrapper[4813]: I1202 11:41:15.100378 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5jfdj_83fd9a6f-d3fc-4e3e-8924-d363bab949eb/control-plane-machine-set-operator/0.log" Dec 02 11:41:15 crc kubenswrapper[4813]: I1202 11:41:15.248660 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4wtmn_d067220e-9800-4c06-b0e2-01d1be8b8986/kube-rbac-proxy/0.log" Dec 02 11:41:15 crc kubenswrapper[4813]: I1202 11:41:15.293657 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4wtmn_d067220e-9800-4c06-b0e2-01d1be8b8986/machine-api-operator/0.log" Dec 02 11:41:30 crc kubenswrapper[4813]: I1202 11:41:30.077587 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-fm8vw_05e6e8d5-45df-4a70-b05c-01e02b33f594/cert-manager-controller/0.log" Dec 02 11:41:30 crc kubenswrapper[4813]: I1202 11:41:30.126901 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-ljd8m_27e3cc05-5212-4270-8ef1-d11c69db84aa/cert-manager-cainjector/0.log" Dec 02 11:41:30 crc kubenswrapper[4813]: I1202 11:41:30.170128 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-qmcmb_b28391b4-11dc-465d-a4e0-de2b65ea8ce6/cert-manager-webhook/0.log" Dec 02 11:41:34 crc kubenswrapper[4813]: I1202 11:41:34.273774 4813 patch_prober.go:28] interesting pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:41:34 crc kubenswrapper[4813]: I1202 11:41:34.274810 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:41:45 crc kubenswrapper[4813]: I1202 11:41:45.108423 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-tqbbq_00c12883-0ec0-42c9-b7ea-581d4cece1f8/nmstate-console-plugin/0.log" Dec 02 11:41:45 crc kubenswrapper[4813]: I1202 11:41:45.305898 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5zqxz_4b2cee1e-053a-4b13-ae80-cd3932e3cddb/nmstate-handler/0.log" Dec 02 11:41:45 crc kubenswrapper[4813]: I1202 11:41:45.352771 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-tdnqv_382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8/kube-rbac-proxy/0.log" Dec 02 11:41:45 crc kubenswrapper[4813]: I1202 11:41:45.400158 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-tdnqv_382b8d39-bbd0-4d7c-8bee-90f0cd26e0b8/nmstate-metrics/0.log" Dec 02 11:41:45 crc kubenswrapper[4813]: I1202 11:41:45.527601 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-h9kkw_83de8a03-b86b-4d85-9c7e-92d7b56235c5/nmstate-operator/0.log" Dec 02 11:41:45 crc kubenswrapper[4813]: I1202 11:41:45.595622 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-vg66j_f26b552b-d766-4432-aba0-6460e8873c7f/nmstate-webhook/0.log" Dec 02 11:42:01 crc kubenswrapper[4813]: I1202 11:42:01.490991 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rjbv4_d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353/kube-rbac-proxy/0.log" Dec 02 11:42:01 crc kubenswrapper[4813]: I1202 11:42:01.591404 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rjbv4_d6902fe1-ed9a-4cbf-80b2-3b1f31d4d353/controller/0.log" Dec 02 11:42:01 crc kubenswrapper[4813]: I1202 11:42:01.702349 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-frr-files/0.log" Dec 02 11:42:01 crc kubenswrapper[4813]: I1202 11:42:01.881478 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-frr-files/0.log" Dec 02 11:42:01 crc kubenswrapper[4813]: I1202 11:42:01.903572 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-reloader/0.log" Dec 02 11:42:01 crc kubenswrapper[4813]: I1202 11:42:01.927042 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-reloader/0.log" Dec 02 11:42:01 crc kubenswrapper[4813]: I1202 11:42:01.953415 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-metrics/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.121267 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-metrics/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.123818 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-reloader/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.152833 
4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-metrics/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.187492 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-frr-files/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.403685 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-reloader/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.403949 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-metrics/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.433710 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/cp-frr-files/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.486608 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/controller/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.590634 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/frr-metrics/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.615229 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/kube-rbac-proxy/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.682840 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/kube-rbac-proxy-frr/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.930550 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/reloader/0.log" Dec 02 11:42:02 crc kubenswrapper[4813]: I1202 11:42:02.976530 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-kvvcz_d71d0a99-b714-49c3-abbd-0e9bcf238c38/frr-k8s-webhook-server/0.log" Dec 02 11:42:03 crc kubenswrapper[4813]: I1202 11:42:03.114110 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7ff5887cf9-z4k7r_73fd072b-2a9c-4d81-968f-86bd031430af/manager/0.log" Dec 02 11:42:03 crc kubenswrapper[4813]: I1202 11:42:03.344772 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7c5b954b74-kvm69_4cbe25f7-67c5-46b3-ab14-d91ce76ca3b9/webhook-server/0.log" Dec 02 11:42:03 crc kubenswrapper[4813]: I1202 11:42:03.397243 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2ssx2_ea89adf2-768b-41ee-93f1-1a52803ace65/kube-rbac-proxy/0.log" Dec 02 11:42:03 crc kubenswrapper[4813]: I1202 11:42:03.965000 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2ssx2_ea89adf2-768b-41ee-93f1-1a52803ace65/speaker/0.log" Dec 02 11:42:04 crc kubenswrapper[4813]: I1202 11:42:04.073937 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-46r87_19477a8e-e8a1-43b4-9272-3f1394a27c60/frr/0.log" Dec 02 11:42:04 crc kubenswrapper[4813]: I1202 11:42:04.273404 4813 patch_prober.go:28] interesting 
pod/machine-config-daemon-4p89g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 11:42:04 crc kubenswrapper[4813]: I1202 11:42:04.273462 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 11:42:04 crc kubenswrapper[4813]: I1202 11:42:04.273506 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" Dec 02 11:42:04 crc kubenswrapper[4813]: I1202 11:42:04.274249 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034"} pod="openshift-machine-config-operator/machine-config-daemon-4p89g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 11:42:04 crc kubenswrapper[4813]: I1202 11:42:04.274304 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" containerName="machine-config-daemon" containerID="cri-o://6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" gracePeriod=600 Dec 02 11:42:04 crc kubenswrapper[4813]: E1202 11:42:04.399970 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:42:05 crc kubenswrapper[4813]: I1202 11:42:05.010479 4813 generic.go:334] "Generic (PLEG): container finished" podID="db121737-190f-4b43-9d79-e96e2dd76080" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" exitCode=0 Dec 02 11:42:05 crc kubenswrapper[4813]: I1202 11:42:05.010517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerDied","Data":"6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034"} Dec 02 11:42:05 crc kubenswrapper[4813]: I1202 11:42:05.010842 4813 scope.go:117] "RemoveContainer" containerID="af6188dd090f1d6022234e4677a02bf82d17c87079eb894b786ec7ad5ec942d9" Dec 02 11:42:05 crc kubenswrapper[4813]: I1202 11:42:05.011650 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:42:05 crc kubenswrapper[4813]: E1202 11:42:05.011991 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:42:17 crc kubenswrapper[4813]: I1202 11:42:17.067637 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:42:17 crc kubenswrapper[4813]: E1202 11:42:17.068337 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:42:18 crc kubenswrapper[4813]: I1202 11:42:18.405068 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747_a761174c-6e39-48ec-9b14-1200fbc2daf3/util/0.log" Dec 02 11:42:18 crc kubenswrapper[4813]: I1202 11:42:18.598858 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747_a761174c-6e39-48ec-9b14-1200fbc2daf3/util/0.log" Dec 02 11:42:18 crc kubenswrapper[4813]: I1202 11:42:18.709361 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747_a761174c-6e39-48ec-9b14-1200fbc2daf3/pull/0.log" Dec 02 11:42:18 crc kubenswrapper[4813]: I1202 11:42:18.792826 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747_a761174c-6e39-48ec-9b14-1200fbc2daf3/pull/0.log" Dec 02 11:42:18 crc kubenswrapper[4813]: I1202 11:42:18.857868 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747_a761174c-6e39-48ec-9b14-1200fbc2daf3/pull/0.log" Dec 02 11:42:18 crc kubenswrapper[4813]: I1202 11:42:18.861743 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747_a761174c-6e39-48ec-9b14-1200fbc2daf3/util/0.log" Dec 02 11:42:18 crc kubenswrapper[4813]: I1202 11:42:18.865454 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fbm747_a761174c-6e39-48ec-9b14-1200fbc2daf3/extract/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.036016 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr_a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef/util/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.228210 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr_a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef/pull/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.270087 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr_a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef/pull/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.311398 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr_a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef/util/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.422162 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr_a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef/pull/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.660398 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr_a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef/util/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.677004 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gkjjr_a99c63c9-afc9-4ec0-aded-7c41f9d7e9ef/extract/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.758639 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mzfpv_72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377/extract-utilities/0.log" Dec 02 11:42:19 crc kubenswrapper[4813]: I1202 11:42:19.993369 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mzfpv_72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377/extract-content/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.019373 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mzfpv_72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377/extract-content/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.022478 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mzfpv_72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377/extract-utilities/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.172085 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mzfpv_72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377/extract-utilities/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.189459 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mzfpv_72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377/extract-content/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.391167 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkqp5_1e7aa9d6-692d-4237-8923-b947d4fab022/extract-utilities/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.593891 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkqp5_1e7aa9d6-692d-4237-8923-b947d4fab022/extract-content/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.649905 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkqp5_1e7aa9d6-692d-4237-8923-b947d4fab022/extract-content/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.666959 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkqp5_1e7aa9d6-692d-4237-8923-b947d4fab022/extract-utilities/0.log" Dec 02 11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.823935 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkqp5_1e7aa9d6-692d-4237-8923-b947d4fab022/extract-utilities/0.log" Dec 02 
11:42:20 crc kubenswrapper[4813]: I1202 11:42:20.831410 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkqp5_1e7aa9d6-692d-4237-8923-b947d4fab022/extract-content/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.011789 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mzfpv_72ac4e4b-7f95-4caa-8a51-8ecc3eaa5377/registry-server/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.060208 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-t8lnq_c9f56a82-0f75-40b8-8bf2-97c83422abbb/marketplace-operator/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.269221 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pl2xf_28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45/extract-utilities/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.371958 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fkqp5_1e7aa9d6-692d-4237-8923-b947d4fab022/registry-server/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.420160 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pl2xf_28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45/extract-content/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.504092 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pl2xf_28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45/extract-utilities/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.554855 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pl2xf_28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45/extract-content/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.742196 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pl2xf_28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45/extract-content/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.742227 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pl2xf_28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45/extract-utilities/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.919441 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pl2xf_28fcfbf0-52fc-4dd2-92ce-da5e6a3a5b45/registry-server/0.log" Dec 02 11:42:21 crc kubenswrapper[4813]: I1202 11:42:21.955998 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kjt9x_015756fa-cf90-4fa0-a1a4-075fa8b36171/extract-utilities/0.log" Dec 02 11:42:22 crc kubenswrapper[4813]: I1202 11:42:22.084977 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kjt9x_015756fa-cf90-4fa0-a1a4-075fa8b36171/extract-utilities/0.log" Dec 02 11:42:22 crc kubenswrapper[4813]: I1202 11:42:22.114445 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kjt9x_015756fa-cf90-4fa0-a1a4-075fa8b36171/extract-content/0.log" Dec 02 11:42:22 crc kubenswrapper[4813]: I1202 11:42:22.119652 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kjt9x_015756fa-cf90-4fa0-a1a4-075fa8b36171/extract-content/0.log" Dec 02 11:42:22 crc 
kubenswrapper[4813]: I1202 11:42:22.305182 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kjt9x_015756fa-cf90-4fa0-a1a4-075fa8b36171/extract-utilities/0.log" Dec 02 11:42:22 crc kubenswrapper[4813]: I1202 11:42:22.311598 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kjt9x_015756fa-cf90-4fa0-a1a4-075fa8b36171/extract-content/0.log" Dec 02 11:42:22 crc kubenswrapper[4813]: I1202 11:42:22.864581 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kjt9x_015756fa-cf90-4fa0-a1a4-075fa8b36171/registry-server/0.log" Dec 02 11:42:31 crc kubenswrapper[4813]: I1202 11:42:31.068859 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:42:31 crc kubenswrapper[4813]: E1202 11:42:31.069942 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:42:43 crc kubenswrapper[4813]: I1202 11:42:43.068138 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:42:43 crc kubenswrapper[4813]: E1202 11:42:43.068958 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:42:54 crc kubenswrapper[4813]: I1202 11:42:54.068735 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:42:54 crc kubenswrapper[4813]: E1202 11:42:54.069380 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:43:09 crc kubenswrapper[4813]: I1202 11:43:09.068635 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:43:09 crc kubenswrapper[4813]: E1202 11:43:09.070789 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:43:24 crc kubenswrapper[4813]: I1202 11:43:24.068587 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 
11:43:24 crc kubenswrapper[4813]: E1202 11:43:24.069683 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:43:36 crc kubenswrapper[4813]: I1202 11:43:36.075417 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:43:36 crc kubenswrapper[4813]: E1202 11:43:36.077939 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:43:50 crc kubenswrapper[4813]: I1202 11:43:50.069344 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:43:50 crc kubenswrapper[4813]: E1202 11:43:50.070514 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:44:02 crc kubenswrapper[4813]: I1202 11:44:02.068177 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:44:02 crc kubenswrapper[4813]: E1202 11:44:02.069246 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:44:14 crc kubenswrapper[4813]: I1202 11:44:14.072678 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:44:14 crc kubenswrapper[4813]: E1202 11:44:14.074170 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.351511 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qh8lk"] Dec 02 11:44:17 crc kubenswrapper[4813]: E1202 11:44:17.353790 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="registry-server" Dec 
02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.354131 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="registry-server" Dec 02 11:44:17 crc kubenswrapper[4813]: E1202 11:44:17.354344 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="extract-content" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.354472 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="extract-content" Dec 02 11:44:17 crc kubenswrapper[4813]: E1202 11:44:17.356225 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="extract-utilities" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.356435 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="extract-utilities" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.356936 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f92dc4-3269-4aa7-b67c-bbd4feac6a80" containerName="registry-server" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.359692 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.373294 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qh8lk"] Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.507687 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-catalog-content\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.508807 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-utilities\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.509480 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799pq\" (UniqueName: \"kubernetes.io/projected/1724807b-ca59-43aa-9810-1f4a9a1bd02f-kube-api-access-799pq\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.610469 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-799pq\" (UniqueName: \"kubernetes.io/projected/1724807b-ca59-43aa-9810-1f4a9a1bd02f-kube-api-access-799pq\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.610558 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-catalog-content\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " 
pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.610665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-utilities\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.611058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-catalog-content\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.611120 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-utilities\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.639833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-799pq\" (UniqueName: \"kubernetes.io/projected/1724807b-ca59-43aa-9810-1f4a9a1bd02f-kube-api-access-799pq\") pod \"community-operators-qh8lk\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:17 crc kubenswrapper[4813]: I1202 11:44:17.723762 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:18 crc kubenswrapper[4813]: I1202 11:44:18.314443 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qh8lk"] Dec 02 11:44:18 crc kubenswrapper[4813]: I1202 11:44:18.468376 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh8lk" event={"ID":"1724807b-ca59-43aa-9810-1f4a9a1bd02f","Type":"ContainerStarted","Data":"20a36e3710708fa94ca356cde1addd14d98a57d0088913b0364a0953a6a7f9dd"} Dec 02 11:44:19 crc kubenswrapper[4813]: I1202 11:44:19.486278 4813 generic.go:334] "Generic (PLEG): container finished" podID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerID="2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3" exitCode=0 Dec 02 11:44:19 crc kubenswrapper[4813]: I1202 11:44:19.486348 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh8lk" event={"ID":"1724807b-ca59-43aa-9810-1f4a9a1bd02f","Type":"ContainerDied","Data":"2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3"} Dec 02 11:44:20 crc kubenswrapper[4813]: I1202 11:44:20.507769 4813 generic.go:334] "Generic (PLEG): container finished" podID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerID="feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711" exitCode=0 Dec 02 11:44:20 crc kubenswrapper[4813]: I1202 11:44:20.507847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p75mk/must-gather-rzmzn" event={"ID":"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7","Type":"ContainerDied","Data":"feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711"} Dec 02 11:44:20 crc kubenswrapper[4813]: I1202 11:44:20.508631 4813 scope.go:117] "RemoveContainer" 
containerID="feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711" Dec 02 11:44:20 crc kubenswrapper[4813]: I1202 11:44:20.995301 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p75mk_must-gather-rzmzn_3bd9d075-2ace-47b4-b022-90ad7fcc1aa7/gather/0.log" Dec 02 11:44:21 crc kubenswrapper[4813]: I1202 11:44:21.547029 4813 generic.go:334] "Generic (PLEG): container finished" podID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerID="8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911" exitCode=0 Dec 02 11:44:21 crc kubenswrapper[4813]: I1202 11:44:21.547177 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh8lk" event={"ID":"1724807b-ca59-43aa-9810-1f4a9a1bd02f","Type":"ContainerDied","Data":"8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911"} Dec 02 11:44:22 crc kubenswrapper[4813]: I1202 11:44:22.562495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh8lk" event={"ID":"1724807b-ca59-43aa-9810-1f4a9a1bd02f","Type":"ContainerStarted","Data":"47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378"} Dec 02 11:44:22 crc kubenswrapper[4813]: I1202 11:44:22.603493 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qh8lk" podStartSLOduration=3.084996587 podStartE2EDuration="5.60346293s" podCreationTimestamp="2025-12-02 11:44:17 +0000 UTC" firstStartedPulling="2025-12-02 11:44:19.490683378 +0000 UTC m=+5783.685857720" lastFinishedPulling="2025-12-02 11:44:22.009149761 +0000 UTC m=+5786.204324063" observedRunningTime="2025-12-02 11:44:22.582532966 +0000 UTC m=+5786.777707308" watchObservedRunningTime="2025-12-02 11:44:22.60346293 +0000 UTC m=+5786.798637272" Dec 02 11:44:25 crc kubenswrapper[4813]: I1202 11:44:25.068466 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:44:25 crc kubenswrapper[4813]: E1202 11:44:25.069720 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:44:27 crc kubenswrapper[4813]: I1202 11:44:27.724655 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:27 crc kubenswrapper[4813]: I1202 11:44:27.724983 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:27 crc kubenswrapper[4813]: I1202 11:44:27.822312 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:28 crc kubenswrapper[4813]: I1202 11:44:28.709572 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:28 crc kubenswrapper[4813]: I1202 11:44:28.778821 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qh8lk"] Dec 02 11:44:29 crc kubenswrapper[4813]: I1202 11:44:29.673616 4813 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-p75mk/must-gather-rzmzn"] Dec 02 11:44:29 crc kubenswrapper[4813]: I1202 11:44:29.674338 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-p75mk/must-gather-rzmzn" podUID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerName="copy" containerID="cri-o://a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8" gracePeriod=2 Dec 02 11:44:29 crc kubenswrapper[4813]: I1202 11:44:29.689346 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-p75mk/must-gather-rzmzn"] Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.195891 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p75mk_must-gather-rzmzn_3bd9d075-2ace-47b4-b022-90ad7fcc1aa7/copy/0.log" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.197034 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/must-gather-rzmzn" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.348722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cgtq\" (UniqueName: \"kubernetes.io/projected/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-kube-api-access-5cgtq\") pod \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\" (UID: \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\") " Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.348890 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-must-gather-output\") pod \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\" (UID: \"3bd9d075-2ace-47b4-b022-90ad7fcc1aa7\") " Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.357818 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-kube-api-access-5cgtq" (OuterVolumeSpecName: "kube-api-access-5cgtq") pod "3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" (UID: "3bd9d075-2ace-47b4-b022-90ad7fcc1aa7"). InnerVolumeSpecName "kube-api-access-5cgtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.452707 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cgtq\" (UniqueName: \"kubernetes.io/projected/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-kube-api-access-5cgtq\") on node \"crc\" DevicePath \"\"" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.527120 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" (UID: "3bd9d075-2ace-47b4-b022-90ad7fcc1aa7"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.554305 4813 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.654285 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-p75mk_must-gather-rzmzn_3bd9d075-2ace-47b4-b022-90ad7fcc1aa7/copy/0.log" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.658466 4813 generic.go:334] "Generic (PLEG): container finished" podID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerID="a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8" exitCode=143 Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.658589 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p75mk/must-gather-rzmzn" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.658657 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qh8lk" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerName="registry-server" containerID="cri-o://47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378" gracePeriod=2 Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.658611 4813 scope.go:117] "RemoveContainer" containerID="a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.692207 4813 scope.go:117] "RemoveContainer" containerID="feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.770502 4813 scope.go:117] "RemoveContainer" containerID="a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8" Dec 02 11:44:30 crc kubenswrapper[4813]: E1202 11:44:30.771064 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8\": container with ID starting with a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8 not found: ID does not exist" containerID="a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.771135 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8"} err="failed to get container status \"a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8\": rpc error: code = NotFound desc = could not find container \"a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8\": container with ID starting with a6547545d9212e7bbeb63bf4420ace63fcb213fcf2f1cb4b0f749fd3eb1f2ea8 not found: ID does not exist" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.771168 4813 scope.go:117] "RemoveContainer" containerID="feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711" Dec 02 11:44:30 crc kubenswrapper[4813]: E1202 11:44:30.771460 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711\": container with ID starting with feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711 not found: ID does not exist" 
containerID="feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711" Dec 02 11:44:30 crc kubenswrapper[4813]: I1202 11:44:30.771500 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711"} err="failed to get container status \"feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711\": rpc error: code = NotFound desc = could not find container \"feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711\": container with ID starting with feb00c0f4f05774bb801db401d82051c3b5578973817201e4b5ee4ae226f2711 not found: ID does not exist" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.281561 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.373904 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-utilities\") pod \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.374141 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-catalog-content\") pod \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.374298 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-799pq\" (UniqueName: \"kubernetes.io/projected/1724807b-ca59-43aa-9810-1f4a9a1bd02f-kube-api-access-799pq\") pod \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\" (UID: \"1724807b-ca59-43aa-9810-1f4a9a1bd02f\") " Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.375211 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-utilities" (OuterVolumeSpecName: "utilities") pod "1724807b-ca59-43aa-9810-1f4a9a1bd02f" (UID: "1724807b-ca59-43aa-9810-1f4a9a1bd02f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.386825 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1724807b-ca59-43aa-9810-1f4a9a1bd02f-kube-api-access-799pq" (OuterVolumeSpecName: "kube-api-access-799pq") pod "1724807b-ca59-43aa-9810-1f4a9a1bd02f" (UID: "1724807b-ca59-43aa-9810-1f4a9a1bd02f"). InnerVolumeSpecName "kube-api-access-799pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.463306 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1724807b-ca59-43aa-9810-1f4a9a1bd02f" (UID: "1724807b-ca59-43aa-9810-1f4a9a1bd02f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.477432 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-799pq\" (UniqueName: \"kubernetes.io/projected/1724807b-ca59-43aa-9810-1f4a9a1bd02f-kube-api-access-799pq\") on node \"crc\" DevicePath \"\"" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.477488 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.477503 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1724807b-ca59-43aa-9810-1f4a9a1bd02f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.677526 4813 generic.go:334] "Generic (PLEG): container finished" podID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerID="47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378" exitCode=0 Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.677611 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh8lk" event={"ID":"1724807b-ca59-43aa-9810-1f4a9a1bd02f","Type":"ContainerDied","Data":"47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378"} Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.677646 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qh8lk" event={"ID":"1724807b-ca59-43aa-9810-1f4a9a1bd02f","Type":"ContainerDied","Data":"20a36e3710708fa94ca356cde1addd14d98a57d0088913b0364a0953a6a7f9dd"} Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.677668 4813 scope.go:117] "RemoveContainer" containerID="47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.677827 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qh8lk" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.710461 4813 scope.go:117] "RemoveContainer" containerID="8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.735282 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qh8lk"] Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.757960 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qh8lk"] Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.767179 4813 scope.go:117] "RemoveContainer" containerID="2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.804625 4813 scope.go:117] "RemoveContainer" containerID="47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378" Dec 02 11:44:31 crc kubenswrapper[4813]: E1202 11:44:31.805450 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378\": container with ID starting with 47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378 not found: ID does not exist" containerID="47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.805509 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378"} err="failed to get container status \"47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378\": rpc error: code = NotFound desc = could not find container \"47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378\": container with ID starting with 47bfe01356e8998beafc72d785912465eff2d017a371a8110c2583b8c93d4378 not found: ID does not exist" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.805548 4813 scope.go:117] "RemoveContainer" containerID="8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911" Dec 02 11:44:31 crc kubenswrapper[4813]: E1202 11:44:31.805966 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911\": container with ID starting with 8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911 not found: ID does not exist" containerID="8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.806085 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911"} err="failed to get container status \"8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911\": rpc error: code = NotFound desc = could not find container \"8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911\": container with ID starting with 8f42aeeab8f176ad8daf538b3479e2b764b08aa3e0b15b4afea66187a9e03911 not found: ID does not exist" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.806115 4813 scope.go:117] "RemoveContainer" containerID="2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3" Dec 02 11:44:31 crc kubenswrapper[4813]: E1202 11:44:31.806631 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3\": container with ID starting with 2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3 not found: ID does not exist" containerID="2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3" Dec 02 11:44:31 crc kubenswrapper[4813]: I1202 11:44:31.806682 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3"} err="failed to get container status \"2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3\": rpc error: code = NotFound desc = could not find container \"2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3\": container with ID starting with 2787fd7db27565e8a12418ffae44aec4bee8526eaf3437a81869207263d441d3 not found: ID does not exist" Dec 02 11:44:32 crc kubenswrapper[4813]: I1202 11:44:32.087549 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" path="/var/lib/kubelet/pods/1724807b-ca59-43aa-9810-1f4a9a1bd02f/volumes" Dec 02 11:44:32 crc kubenswrapper[4813]: I1202 11:44:32.089318 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" path="/var/lib/kubelet/pods/3bd9d075-2ace-47b4-b022-90ad7fcc1aa7/volumes" Dec 02 11:44:38 crc kubenswrapper[4813]: I1202 11:44:38.068437 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:44:38 crc kubenswrapper[4813]: E1202 11:44:38.069158 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:44:53 crc kubenswrapper[4813]: I1202 11:44:53.068496 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:44:53 crc kubenswrapper[4813]: E1202 11:44:53.069300 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.183209 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs"] Dec 02 11:45:00 crc kubenswrapper[4813]: E1202 11:45:00.186017 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerName="copy" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.186033 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerName="copy" Dec 02 11:45:00 crc kubenswrapper[4813]: E1202 11:45:00.186062 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" 
containerName="registry-server" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.186092 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerName="registry-server" Dec 02 11:45:00 crc kubenswrapper[4813]: E1202 11:45:00.186109 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerName="extract-content" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.186116 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerName="extract-content" Dec 02 11:45:00 crc kubenswrapper[4813]: E1202 11:45:00.186152 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerName="gather" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.186161 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerName="gather" Dec 02 11:45:00 crc kubenswrapper[4813]: E1202 11:45:00.186181 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerName="extract-utilities" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.186190 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerName="extract-utilities" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.188109 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerName="copy" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.188143 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1724807b-ca59-43aa-9810-1f4a9a1bd02f" containerName="registry-server" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.188155 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd9d075-2ace-47b4-b022-90ad7fcc1aa7" containerName="gather" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.194912 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.198846 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.199716 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.234345 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs"] Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.325183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8950fe4d-ae4c-45e5-9d81-2c267444a8df-config-volume\") pod \"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.325288 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8950fe4d-ae4c-45e5-9d81-2c267444a8df-secret-volume\") pod \"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.325316 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8dz\" (UniqueName: \"kubernetes.io/projected/8950fe4d-ae4c-45e5-9d81-2c267444a8df-kube-api-access-xt8dz\") pod \"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.427432 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8950fe4d-ae4c-45e5-9d81-2c267444a8df-config-volume\") pod \"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.427526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8950fe4d-ae4c-45e5-9d81-2c267444a8df-secret-volume\") pod \"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.427563 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8dz\" (UniqueName: \"kubernetes.io/projected/8950fe4d-ae4c-45e5-9d81-2c267444a8df-kube-api-access-xt8dz\") pod \"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.429148 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8950fe4d-ae4c-45e5-9d81-2c267444a8df-config-volume\") pod 
\"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.439317 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8950fe4d-ae4c-45e5-9d81-2c267444a8df-secret-volume\") pod \"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.457513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8dz\" (UniqueName: \"kubernetes.io/projected/8950fe4d-ae4c-45e5-9d81-2c267444a8df-kube-api-access-xt8dz\") pod \"collect-profiles-29411265-snpbs\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.526943 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:00 crc kubenswrapper[4813]: I1202 11:45:00.786769 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs"] Dec 02 11:45:01 crc kubenswrapper[4813]: I1202 11:45:01.073116 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" event={"ID":"8950fe4d-ae4c-45e5-9d81-2c267444a8df","Type":"ContainerStarted","Data":"6fb495600061035686fc6870329d537c29a0b9001c418dd85867657ff20cd072"} Dec 02 11:45:01 crc kubenswrapper[4813]: I1202 11:45:01.073422 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" event={"ID":"8950fe4d-ae4c-45e5-9d81-2c267444a8df","Type":"ContainerStarted","Data":"df5195dbeb77796954c1e2303f1ca47b9ac01aa7667d016c245c1a048f0547b1"} Dec 02 11:45:01 crc kubenswrapper[4813]: I1202 11:45:01.094634 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" podStartSLOduration=1.094613622 podStartE2EDuration="1.094613622s" podCreationTimestamp="2025-12-02 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 11:45:01.089570419 +0000 UTC m=+5825.284744721" watchObservedRunningTime="2025-12-02 11:45:01.094613622 +0000 UTC m=+5825.289787924" Dec 02 11:45:02 crc kubenswrapper[4813]: I1202 11:45:02.082276 4813 generic.go:334] "Generic (PLEG): container finished" podID="8950fe4d-ae4c-45e5-9d81-2c267444a8df" containerID="6fb495600061035686fc6870329d537c29a0b9001c418dd85867657ff20cd072" exitCode=0 Dec 02 11:45:02 crc kubenswrapper[4813]: I1202 11:45:02.082335 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" event={"ID":"8950fe4d-ae4c-45e5-9d81-2c267444a8df","Type":"ContainerDied","Data":"6fb495600061035686fc6870329d537c29a0b9001c418dd85867657ff20cd072"} Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.535937 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.704130 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt8dz\" (UniqueName: \"kubernetes.io/projected/8950fe4d-ae4c-45e5-9d81-2c267444a8df-kube-api-access-xt8dz\") pod \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.704461 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8950fe4d-ae4c-45e5-9d81-2c267444a8df-secret-volume\") pod \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.704609 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8950fe4d-ae4c-45e5-9d81-2c267444a8df-config-volume\") pod \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\" (UID: \"8950fe4d-ae4c-45e5-9d81-2c267444a8df\") " Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.705544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8950fe4d-ae4c-45e5-9d81-2c267444a8df-config-volume" (OuterVolumeSpecName: "config-volume") pod "8950fe4d-ae4c-45e5-9d81-2c267444a8df" (UID: "8950fe4d-ae4c-45e5-9d81-2c267444a8df"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.711019 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8950fe4d-ae4c-45e5-9d81-2c267444a8df-kube-api-access-xt8dz" (OuterVolumeSpecName: "kube-api-access-xt8dz") pod "8950fe4d-ae4c-45e5-9d81-2c267444a8df" (UID: "8950fe4d-ae4c-45e5-9d81-2c267444a8df"). InnerVolumeSpecName "kube-api-access-xt8dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.711134 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8950fe4d-ae4c-45e5-9d81-2c267444a8df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8950fe4d-ae4c-45e5-9d81-2c267444a8df" (UID: "8950fe4d-ae4c-45e5-9d81-2c267444a8df"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.807899 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8950fe4d-ae4c-45e5-9d81-2c267444a8df-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.808448 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt8dz\" (UniqueName: \"kubernetes.io/projected/8950fe4d-ae4c-45e5-9d81-2c267444a8df-kube-api-access-xt8dz\") on node \"crc\" DevicePath \"\"" Dec 02 11:45:03 crc kubenswrapper[4813]: I1202 11:45:03.808479 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8950fe4d-ae4c-45e5-9d81-2c267444a8df-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 11:45:04 crc kubenswrapper[4813]: I1202 11:45:04.150556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" event={"ID":"8950fe4d-ae4c-45e5-9d81-2c267444a8df","Type":"ContainerDied","Data":"df5195dbeb77796954c1e2303f1ca47b9ac01aa7667d016c245c1a048f0547b1"} Dec 02 11:45:04 crc kubenswrapper[4813]: I1202 11:45:04.150861 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df5195dbeb77796954c1e2303f1ca47b9ac01aa7667d016c245c1a048f0547b1" Dec 02 11:45:04 crc kubenswrapper[4813]: I1202 11:45:04.150633 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411265-snpbs" Dec 02 11:45:04 crc kubenswrapper[4813]: I1202 11:45:04.646184 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7"] Dec 02 11:45:04 crc kubenswrapper[4813]: I1202 11:45:04.659473 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411220-7hhl7"] Dec 02 11:45:06 crc kubenswrapper[4813]: I1202 11:45:06.088989 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507629a1-b497-4708-9533-1fd8c258584c" path="/var/lib/kubelet/pods/507629a1-b497-4708-9533-1fd8c258584c/volumes" Dec 02 11:45:07 crc kubenswrapper[4813]: I1202 11:45:07.068345 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:45:07 crc kubenswrapper[4813]: E1202 11:45:07.068992 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:45:07 crc kubenswrapper[4813]: I1202 11:45:07.707317 4813 scope.go:117] "RemoveContainer" containerID="f69e4ed7e4fab7aba24de312a55acf452ff946fc9dac89029f269bc18f3c97c4" Dec 02 11:45:07 crc kubenswrapper[4813]: I1202 11:45:07.811272 4813 scope.go:117] "RemoveContainer" containerID="0e54cfd4750cfde723a589c46333703f2e0f0da2c99bb2dd1184895f2c1403c6" Dec 02 11:45:21 crc kubenswrapper[4813]: I1202 11:45:21.068153 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:45:21 crc kubenswrapper[4813]: E1202 11:45:21.069251 
4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:45:34 crc kubenswrapper[4813]: I1202 11:45:34.069471 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:45:34 crc kubenswrapper[4813]: E1202 11:45:34.070503 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:45:49 crc kubenswrapper[4813]: I1202 11:45:49.068046 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:45:49 crc kubenswrapper[4813]: E1202 11:45:49.069018 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:46:01 crc kubenswrapper[4813]: I1202 11:46:01.068042 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:46:01 crc kubenswrapper[4813]: E1202 11:46:01.069205 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:46:08 crc kubenswrapper[4813]: I1202 11:46:08.008792 4813 scope.go:117] "RemoveContainer" containerID="f79c5177ccb1ac24473c8f7d5efd757173da2a912b6c13e6f9a50de39d61f8d5" Dec 02 11:46:15 crc kubenswrapper[4813]: I1202 11:46:15.068535 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:46:15 crc kubenswrapper[4813]: E1202 11:46:15.069495 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:46:26 crc kubenswrapper[4813]: I1202 11:46:26.080681 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:46:26 crc kubenswrapper[4813]: E1202 11:46:26.081337 4813 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:46:39 crc kubenswrapper[4813]: I1202 11:46:39.068548 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:46:39 crc kubenswrapper[4813]: E1202 11:46:39.069795 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:46:52 crc kubenswrapper[4813]: I1202 11:46:52.069162 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:46:52 crc kubenswrapper[4813]: E1202 11:46:52.070306 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:47:03 crc kubenswrapper[4813]: I1202 11:47:03.068266 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:47:03 crc kubenswrapper[4813]: E1202 11:47:03.069284 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4p89g_openshift-machine-config-operator(db121737-190f-4b43-9d79-e96e2dd76080)\"" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" podUID="db121737-190f-4b43-9d79-e96e2dd76080" Dec 02 11:47:17 crc kubenswrapper[4813]: I1202 11:47:17.068601 4813 scope.go:117] "RemoveContainer" containerID="6fe14fe495b16de2ec7f5a13b39f0ec09108232756715970c7e0b5d9385ad034" Dec 02 11:47:17 crc kubenswrapper[4813]: I1202 11:47:17.689436 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4p89g" event={"ID":"db121737-190f-4b43-9d79-e96e2dd76080","Type":"ContainerStarted","Data":"1670ae5c891c2b1397261a1edd6a92c99b16c93fe81084a87e6b8e4c3c3e3ef2"} Dec 02 11:47:58 crc kubenswrapper[4813]: I1202 11:47:58.792414 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cnxwd"] Dec 02 11:47:58 crc kubenswrapper[4813]: E1202 11:47:58.793390 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8950fe4d-ae4c-45e5-9d81-2c267444a8df" containerName="collect-profiles" Dec 02 11:47:58 crc kubenswrapper[4813]: I1202 11:47:58.793404 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8950fe4d-ae4c-45e5-9d81-2c267444a8df" containerName="collect-profiles" Dec 02 11:47:58 crc 
kubenswrapper[4813]: I1202 11:47:58.793632 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8950fe4d-ae4c-45e5-9d81-2c267444a8df" containerName="collect-profiles" Dec 02 11:47:58 crc kubenswrapper[4813]: I1202 11:47:58.795254 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:58 crc kubenswrapper[4813]: I1202 11:47:58.804639 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnxwd"] Dec 02 11:47:58 crc kubenswrapper[4813]: I1202 11:47:58.916793 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8h8g\" (UniqueName: \"kubernetes.io/projected/8d2edce8-9e19-4d83-89d2-0219031abf57-kube-api-access-m8h8g\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:58 crc kubenswrapper[4813]: I1202 11:47:58.917123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-catalog-content\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:58 crc kubenswrapper[4813]: I1202 11:47:58.917225 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-utilities\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:59 crc kubenswrapper[4813]: I1202 11:47:59.018816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-utilities\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:59 crc kubenswrapper[4813]: I1202 11:47:59.018996 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8h8g\" (UniqueName: \"kubernetes.io/projected/8d2edce8-9e19-4d83-89d2-0219031abf57-kube-api-access-m8h8g\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:59 crc kubenswrapper[4813]: I1202 11:47:59.019045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-catalog-content\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:59 crc kubenswrapper[4813]: I1202 11:47:59.019433 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-utilities\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:59 crc kubenswrapper[4813]: I1202 11:47:59.019452 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-catalog-content\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:59 crc kubenswrapper[4813]: I1202 11:47:59.038340 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8h8g\" (UniqueName: \"kubernetes.io/projected/8d2edce8-9e19-4d83-89d2-0219031abf57-kube-api-access-m8h8g\") pod \"redhat-marketplace-cnxwd\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:59 crc kubenswrapper[4813]: I1202 11:47:59.125954 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:47:59 crc kubenswrapper[4813]: I1202 11:47:59.619912 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnxwd"] Dec 02 11:48:00 crc kubenswrapper[4813]: I1202 11:48:00.128000 4813 generic.go:334] "Generic (PLEG): container finished" podID="8d2edce8-9e19-4d83-89d2-0219031abf57" containerID="601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9" exitCode=0 Dec 02 11:48:00 crc kubenswrapper[4813]: I1202 11:48:00.128052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnxwd" event={"ID":"8d2edce8-9e19-4d83-89d2-0219031abf57","Type":"ContainerDied","Data":"601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9"} Dec 02 11:48:00 crc kubenswrapper[4813]: I1202 11:48:00.128369 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnxwd" event={"ID":"8d2edce8-9e19-4d83-89d2-0219031abf57","Type":"ContainerStarted","Data":"c70b63310ffa8f018ec796b6ae081af5c20ca292879ef256e621c75ee517a159"} Dec 02 11:48:00 crc kubenswrapper[4813]: I1202 11:48:00.133330 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 11:48:01 crc kubenswrapper[4813]: I1202 11:48:01.139281 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnxwd" event={"ID":"8d2edce8-9e19-4d83-89d2-0219031abf57","Type":"ContainerStarted","Data":"d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c"} Dec 02 11:48:02 crc kubenswrapper[4813]: I1202 11:48:02.155885 4813 generic.go:334] "Generic (PLEG): container finished" podID="8d2edce8-9e19-4d83-89d2-0219031abf57" containerID="d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c" exitCode=0 Dec 02 11:48:02 crc kubenswrapper[4813]: I1202 11:48:02.155999 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnxwd" event={"ID":"8d2edce8-9e19-4d83-89d2-0219031abf57","Type":"ContainerDied","Data":"d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c"} Dec 02 11:48:03 crc kubenswrapper[4813]: I1202 11:48:03.167204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnxwd" event={"ID":"8d2edce8-9e19-4d83-89d2-0219031abf57","Type":"ContainerStarted","Data":"dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6"} Dec 02 11:48:09 crc kubenswrapper[4813]: I1202 11:48:09.126214 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:48:09 crc kubenswrapper[4813]: I1202 11:48:09.127773 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:48:09 crc kubenswrapper[4813]: I1202 11:48:09.215033 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:48:09 crc kubenswrapper[4813]: I1202 11:48:09.255004 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cnxwd" podStartSLOduration=8.574108514 podStartE2EDuration="11.254982505s" podCreationTimestamp="2025-12-02 11:47:58 +0000 UTC" firstStartedPulling="2025-12-02 11:48:00.132711727 +0000 UTC m=+6004.327886069" lastFinishedPulling="2025-12-02 11:48:02.813585758 +0000 UTC m=+6007.008760060" observedRunningTime="2025-12-02 11:48:03.189360077 +0000 UTC m=+6007.384534379" watchObservedRunningTime="2025-12-02 11:48:09.254982505 +0000 UTC m=+6013.450156807" Dec 02 11:48:09 crc kubenswrapper[4813]: I1202 11:48:09.325651 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.209513 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnxwd"] Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.210601 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cnxwd" podUID="8d2edce8-9e19-4d83-89d2-0219031abf57" containerName="registry-server" containerID="cri-o://dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6" gracePeriod=2 Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.679489 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.704557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-catalog-content\") pod \"8d2edce8-9e19-4d83-89d2-0219031abf57\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.704820 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-utilities\") pod \"8d2edce8-9e19-4d83-89d2-0219031abf57\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.705108 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8h8g\" (UniqueName: \"kubernetes.io/projected/8d2edce8-9e19-4d83-89d2-0219031abf57-kube-api-access-m8h8g\") pod \"8d2edce8-9e19-4d83-89d2-0219031abf57\" (UID: \"8d2edce8-9e19-4d83-89d2-0219031abf57\") " Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.707459 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-utilities" (OuterVolumeSpecName: "utilities") pod "8d2edce8-9e19-4d83-89d2-0219031abf57" (UID: "8d2edce8-9e19-4d83-89d2-0219031abf57"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.712450 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2edce8-9e19-4d83-89d2-0219031abf57-kube-api-access-m8h8g" (OuterVolumeSpecName: "kube-api-access-m8h8g") pod "8d2edce8-9e19-4d83-89d2-0219031abf57" (UID: "8d2edce8-9e19-4d83-89d2-0219031abf57"). InnerVolumeSpecName "kube-api-access-m8h8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.734395 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d2edce8-9e19-4d83-89d2-0219031abf57" (UID: "8d2edce8-9e19-4d83-89d2-0219031abf57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.807420 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8h8g\" (UniqueName: \"kubernetes.io/projected/8d2edce8-9e19-4d83-89d2-0219031abf57-kube-api-access-m8h8g\") on node \"crc\" DevicePath \"\"" Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.807453 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 11:48:12 crc kubenswrapper[4813]: I1202 11:48:12.807463 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2edce8-9e19-4d83-89d2-0219031abf57-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.306355 4813 generic.go:334] "Generic (PLEG): container finished" podID="8d2edce8-9e19-4d83-89d2-0219031abf57" containerID="dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6" exitCode=0 Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.306591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnxwd" event={"ID":"8d2edce8-9e19-4d83-89d2-0219031abf57","Type":"ContainerDied","Data":"dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6"} Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.306685 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnxwd" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.306713 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnxwd" event={"ID":"8d2edce8-9e19-4d83-89d2-0219031abf57","Type":"ContainerDied","Data":"c70b63310ffa8f018ec796b6ae081af5c20ca292879ef256e621c75ee517a159"} Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.306743 4813 scope.go:117] "RemoveContainer" containerID="dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.334895 4813 scope.go:117] "RemoveContainer" containerID="d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.337931 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnxwd"] Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.345411 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnxwd"] Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.364720 4813 scope.go:117] "RemoveContainer" containerID="601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.397588 4813 scope.go:117] "RemoveContainer" containerID="dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6" Dec 02 11:48:13 crc kubenswrapper[4813]: E1202 11:48:13.398056 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6\": container with ID starting with dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6 not found: ID does not exist" containerID="dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.398171 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6"} err="failed to get container status \"dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6\": rpc error: code = NotFound desc = could not find container \"dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6\": container with ID starting with dc95eedf58ddc9a3c81c321c945fff4e110cf35150762780614929274c1f6bf6 not found: ID does not exist" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.398205 4813 scope.go:117] "RemoveContainer" containerID="d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c" Dec 02 11:48:13 crc kubenswrapper[4813]: E1202 11:48:13.398515 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c\": container with ID starting with d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c not found: ID does not exist" containerID="d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.398559 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c"} err="failed to get container status \"d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c\": rpc error: code = NotFound desc = could not find 
container \"d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c\": container with ID starting with d9fb409a7b2153216e38f349f32cce1c9828fb40c6081b6421cee70aee73e23c not found: ID does not exist" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.398585 4813 scope.go:117] "RemoveContainer" containerID="601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9" Dec 02 11:48:13 crc kubenswrapper[4813]: E1202 11:48:13.399047 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9\": container with ID starting with 601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9 not found: ID does not exist" containerID="601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9" Dec 02 11:48:13 crc kubenswrapper[4813]: I1202 11:48:13.399162 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9"} err="failed to get container status \"601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9\": rpc error: code = NotFound desc = could not find container \"601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9\": container with ID starting with 601bdc70eb5b0cabe3884cb89d6b388449a5bb4be7bd2b1841470653eb2340e9 not found: ID does not exist" Dec 02 11:48:14 crc kubenswrapper[4813]: I1202 11:48:14.087622 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2edce8-9e19-4d83-89d2-0219031abf57" path="/var/lib/kubelet/pods/8d2edce8-9e19-4d83-89d2-0219031abf57/volumes"